
SAMPLING EXACTLY THE GHZ DISTRIBUTION IN THE RANDOM BIT MODEL


Any von Neumann measurement on a single qubit can be conveniently represented by a point on the surface of a three-dimensional sphere, known as the Bloch sphere, whose spherical coordinates can be specified by an azimuthal angle $\theta \in [0, 2\pi)$ and an elevation angle $\varphi \in [-\pi/2, \pi/2]$. These parameters define operator
$$M = x\sigma_1 + y\sigma_2 + z\sigma_3 = \begin{pmatrix} \sin\varphi & e^{-\imath\theta}\cos\varphi \\ e^{\imath\theta}\cos\varphi & -\sin\varphi \end{pmatrix},$$
where $x = \cos\theta\cos\varphi$, $y = \sin\theta\cos\varphi$, $z = \sin\varphi$, and $\sigma_1$, $\sigma_2$ and $\sigma_3$ are the Pauli operators. In turn, this operator defines a measurement in the usual way, which we shall also denote $M$ for convenience, whose outcome is one of its eigenvalues $+1$ or $-1$. A von Neumann measurement is said to be \emph{equatorial} when its elevation angle $\varphi = 0$ vanishes, and it is said to be \emph{in the computational basis} when $\varphi = \pm\pi/2$.

Consider a set of $n$ von Neumann single-qubit measurements $M_j$, represented by their parameters $(\theta_j, \varphi_j)$, where $1 \le j \le n$. This set of operators defines a joint measurement $M = \bigotimes_{j=1}^{n} M_j$. In turn, this measurement defines a probability distribution $p$ on the set $\{-1,+1\}^n$, which corresponds to the probability of all possible outcomes when the $n$-partite GHZ state $|\Phi_n\rangle = \frac{1}{\sqrt{2}}|0^n\rangle + \frac{1}{\sqrt{2}}|1^n\rangle$ is measured according to $M$. We call it the \emph{GHZ distribution}.

Following Refs. [17], [18], we show in Appendix A that the probability $p(b)$ of obtaining $b = (b_1, \ldots, b_n)$ in $\{-1,+1\}^n$ can be decomposed as
$$p(b) = \cos^2(\theta/2)\, p_0(b) + \sin^2(\theta/2)\, p_1(b) \tag{1}$$
where $\theta = \sum_{j=1}^{n} \theta_j$,
$$p_0(b) = \tfrac{1}{2}\bigl(a_0(b) + a_1(b)\bigr)^2, \qquad p_1(b) = \tfrac{1}{2}\bigl(a_0(b) - a_1(b)\bigr)^2, \tag{2}$$
$$a_0(b) = \prod_{j=1}^{n} \cos\Bigl(\tfrac{1}{2}\bigl(\varphi_j - \tfrac{\pi}{2} b_j\bigr)\Bigr) \quad\text{and}\quad a_1(b) = (-1)^n \prod_{j=1}^{n} \sin\Bigl(\tfrac{1}{2}\bigl(\varphi_j - \tfrac{\pi}{2} b_j\bigr)\Bigr). \tag{3}$$
Hence, we see that distribution $p$ is a convex combination of sub-distributions $p_0$ and $p_1$, whose coefficients $\cos^2(\theta/2)$ and $\sin^2(\theta/2)$ depend only on the azimuthal angles of the measurements, whereas the sub-distributions depend only on their elevation angles. Sampling $p$ is therefore a matter of sampling a Bernoulli distribution with defining parameter $\cos^2(\theta/2)$ before sampling either $p_0$ or $p_1$, whichever is the case. As we shall see, full knowledge of the parameters is not required to sample $p$ exactly.
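As an illustration of this decomposition, here is a minimal Python sketch (ours, not from the paper; all names are our own) that evaluates $p(b)$ via Eqs. (1)-(3) for every outcome $b$ and checks that the result is indeed a probability distribution:

```python
from itertools import product
from math import cos, sin, pi

def ghz_probs(thetas, phis):
    """Probability p(b) for each b in {-1,+1}^n, via Eqs. (1)-(3).

    thetas, phis: per-party azimuthal and elevation angles.
    """
    n = len(thetas)
    theta = sum(thetas)
    w0, w1 = cos(theta / 2) ** 2, sin(theta / 2) ** 2   # convex weights
    p = {}
    for b in product((-1, +1), repeat=n):
        a0 = 1.0
        a1 = float((-1) ** n)
        for phi_j, b_j in zip(phis, b):
            a0 *= cos((phi_j - (pi / 2) * b_j) / 2)     # factor of a0(b), Eq. (3)
            a1 *= sin((phi_j - (pi / 2) * b_j) / 2)     # factor of a1(b), Eq. (3)
        p0 = (a0 + a1) ** 2 / 2                         # Eq. (2)
        p1 = (a0 - a1) ** 2 / 2
        p[b] = w0 * p0 + w1 * p1                        # Eq. (1)
    return p

probs = ghz_probs([0.3, 1.1, -0.7], [0.2, -0.5, 1.0])
assert abs(sum(probs.values()) - 1.0) < 1e-12           # normalization
```

The normalization check works because $\sum_b a_0^2(b) = \sum_b a_1^2(b) = 1$ while the cross terms $a_0(b)a_1(b)$ sum to zero, so $p_0$ and $p_1$ are each normalized.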
We shall see in Section II-A how to sample a Bernoulli distribution with an arbitrary $p \in [0,1]$ as parameter (not the same $p$ as our probability distribution for GHZ) using a sequence of approximants converging to $p$ and an expected number of only five unbiased independent identically distributed (i.i.d.) random bits. Subsequently, we shall see in Section II-B how to sample $p_0$ (or $p_1$) by modifying von Neumann's rejection algorithm in a way that uses sequences of approximants and unbiased i.i.d. random bits. For simulating exactly the GHZ distribution, an expected number of $6n + 17$ perfect random bits is sufficient. For simplicity, we shall henceforth systematically ignore zero-probability events in our analyses.

A. Sampling a Bernoulli Distribution

Assume that only a random bit generator is available to sample a given probability distribution and that the parameters that specify this distribution are only accessible as follows: we can ask for any number of bits of each parameter, but will be charged one unit of cost per bit that is revealed. We shall also be charged for each random bit requested from the generator.

To warm up to this conundrum, consider the problem of generating a Bernoulli random variable $Y$ with parameter $p \in [0,1]$, meaning that $Y = 0$ with probability $p$ and $Y = 1$ otherwise. If $U = 0.U_1U_2\ldots$ is the binary expansion of a uniform $[0,1]$ random variable, i.e. $U_1, U_2, \ldots$ is our source of unbiased independent random bits, and if $p = 0.p_1p_2\ldots$ is the binary expansion of $p$ (in case $p = 1$, we can proceed as if it were $0.p_1p_2\ldots$ with each $p_i = 1$), we compare bits $U_i$ and $p_i$ for $i = 1, 2, \ldots$ until the first time $U_i \ne p_i$. Then, if $U_i = 0 < p_i = 1$, we return $Y = 0$, and if $U_i = 1 > p_i = 0$, we return $Y = 1$. It is clear that $Y = 0$ if and only if $U < p$, which happens with probability $p$. Therefore, $Y$ is indeed Bernoulli($p$). The expected number of bits required from $p$ is precisely 2.
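The bit-by-bit comparison just described can be sketched in Python as follows (our naming; `p_bit` is a hypothetical callback serving the bits of $p$'s binary expansion):

```python
import random

def bernoulli_from_bits(p_bit, u_bit=lambda: random.getrandbits(1)):
    """Return Y = 0 iff U < p, comparing binary expansions bit by bit.

    p_bit(i) gives the i-th bit of p = 0.p1 p2 ... (i >= 1);
    u_bit() draws one unbiased random bit.  Stops at the first
    index where the bits differ, after an expected 2 comparisons.
    """
    i = 1
    while True:
        u, pb = u_bit(), p_bit(i)
        if u != pb:
            return 0 if u < pb else 1   # u=0 < pb=1 means U < p, so Y = 0
        i += 1

# Example: p = 1/3 = 0.010101..._2, so the i-th bit is 1 exactly when i is even
p_third = lambda i: 1 if i % 2 == 0 else 0
ys = [bernoulli_from_bits(p_third) for _ in range(200000)]
# fraction of Y == 0 should be near p = 1/3
```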
The expected number of bits needed from our random bit source is also 2.

Now, suppose that the parameter $p$ defining our Bernoulli distribution is given by $p = \cos^2(\theta/2)$, as in the case of our decomposition of the GHZ distribution. None of the parties can know $\theta$ precisely since it is distributed as a sum of $\theta_j$'s, each of which is known only by one individual party. If we could obtain as many bits in the binary expansion of $p$ as needed (although the expected number of required bits is as little as 2), we would use the idea given above in order to sample according to this Bernoulli distribution. However, it is not possible in general to know even the first bit of $p$ given any fixed number of bits of the $\theta_j$'s. For instance, if $\theta$ is arbitrarily close to $\pi/2$, we need arbitrarily many bits of precision about it before we can tell if the first bit in the binary expansion of $\cos^2(\theta/2)$ is 0 or 1. Nevertheless, we can use \emph{approximations} of $p$, rather than truncations, which in turn can come from approximations (in particular truncations) of the $\theta_j$'s.

Definition 1: A $k$-bit \emph{approximation} of a real number $x$ is any $\hat{x}$ such that $|x - \hat{x}| \le 2^{-k}$. A special case of $k$-bit approximation is the $k$-bit \emph{truncation} $\hat{x} = \mathrm{sign}(x) \lfloor |x| 2^k \rfloor / 2^k$, where $\mathrm{sign}(x)$ is equal to $+1$, $0$ or $-1$ depending on the sign of $x$. Note that the value of $k$ corresponds to the number of bits in the fractional part, without limitation on the size of the integer part, and that it does not take account of the sign in case it has to be transmitted too.

We postpone to Section III-B the detail of how these approximations can be obtained in a distributed setting. For now, let us assume that we can obtain an approximation $p[k]$ so that $|p[k] - p| \le 2^{-k}$ for any desired precision $k$. Then, setting $U[k] = 0.U_1\ldots U_k$ so that $U[k] \le U < U[k] + 2^{-k}$, we have that $U \ge p$ if $U[k] \ge p[k] + 2^{-k}$, whereas $U < p$ if $U[k] + 2^{-k} \le p[k] - 2^{-k}$, hence if $U[k] \le p[k] - 2 \cdot 2^{-k}$. Thus, one can check if $U < p$ by generating only a finite number of bits of $U$ and increasingly good approximations of $p$. These ideas are formalized in Algorithm 1.
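The idea behind Algorithm 1 can be sketched as follows (our code, not the paper's pseudocode; it assumes a hypothetical callback `p_approx(k)` returning a $k$-bit approximation of $p$):

```python
import random
from math import cos

def bernoulli_from_approx(p_approx):
    """Return Y = 0 iff U < p, using only k-bit approximations of p.

    p_approx(k) must satisfy |p_approx(k) - p| <= 2**-k.  One bit of
    the uniform U is revealed per iteration until the comparison
    U < p can be decided with certainty.
    """
    U_k, k = 0.0, 0   # k-bit truncation of U, built bit by bit
    while True:
        k += 1
        U_k += random.getrandbits(1) * 2.0 ** -k
        pk, eps = p_approx(k), 2.0 ** -k
        if U_k >= pk + eps:          # certainly U >= p
            return 1
        if U_k <= pk - 2 * eps:      # certainly U < p
            return 0

# Example with p = cos^2(theta/2); the served approximation is
# deliberately noisy but stays within 2**-k of p, as required
theta = 1.234
p = cos(theta / 2) ** 2
approx = lambda k: p + ((-1) ** k) * 2.0 ** -(k + 1)
ys = [bernoulli_from_approx(approx) for _ in range(200000)]
# fraction of Y == 0 should be near p
```

Note that the sampler is exact despite the noise in `approx`: each return is taken only once the truncation of $U$ and the approximation of $p$ make the outcome of $U < p$ certain.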
It is elementary to verify that the $Y$ generated by this algorithm is Bernoulli($p$), again because $P\{U < p\} = p$ if $U$ is a continuous uniform random variable on $[0,1]$.

The number of iterations before Algorithm 1 returns a value, which is also its required number of independent unbiased random bits, is a random variable, say $K$. We have seen above that $E\{K\}$, the expected value of $K$, would be exactly 2 if we could generate arbitrarily precise truncations of $p$. But since we can only obtain arbitrarily precise \emph{approximations} instead, which is why we needed Algorithm 1 in the first place, we shall have to pay the price of a small increase in $E\{K\}$:
$$P\{K > k\} \le P\bigl\{|U[k] - p[k]| \le 2 \cdot 2^{-k}\bigr\} \le P\bigl\{|U - p| \le 4 \cdot 2^{-k}\bigr\} \le 8 \cdot 2^{-k}.$$

Therefore,
$$E\{K\} = \sum_{k=0}^{\infty} P\{K > k\} \le \sum_{k=0}^{\infty} \min(1, 8 \cdot 2^{-k}) = 5.$$

B. Sampling $p_0$ (or $p_1$) in the Random Bit Model

Here, we concentrate on sampling $p_0$ since $p_1$ can be sampled in the same way, \emph{mutatis mutandis}. Let us define
$$\alpha_j = \cos\bigl(\tfrac{1}{2}(\varphi_j - \tfrac{\pi}{2})\bigr) = \sin\bigl(\tfrac{1}{2}(\varphi_j + \tfrac{\pi}{2})\bigr) \quad\text{and}\quad \beta_j = \cos\bigl(\tfrac{1}{2}(\varphi_j + \tfrac{\pi}{2})\bigr) = -\sin\bigl(\tfrac{1}{2}(\varphi_j - \tfrac{\pi}{2})\bigr). \tag{4}$$
Clearly, $\alpha_j^2 + \beta_j^2 = 1$. Now, consider $n$ Rademacher\footnote{A Rademacher random variable is just like a Bernoulli, except that it takes value $+1$ or $-1$, rather than 0 or 1.} random variables $B_j$ that take value $-1$ with probability $\beta_j^2$ and $+1$ with complementary probability $\alpha_j^2$. The random vector with independent components given by $(B_1, \ldots, B_n)$ is distributed according to
$$q_0(b) \stackrel{\text{def}}{=} \prod_{j \in F_b} \beta_j^2 \prod_{j \in G_b} \alpha_j^2 = a_0^2(b),$$
where $F_b = \{j \mid b_j = -1\}$ and $G_b = \{j \mid b_j = +1\}$ for all $b = (b_1, \ldots, b_n) \in \{-1,+1\}^n$, and $a_0$ is given in (3). Similarly, the random vector with independent components given by $(-B_1, \ldots, -B_n)$ is distributed according to
$$q_1(b) \stackrel{\text{def}}{=} \prod_{j \in F_b} \alpha_j^2 \prod_{j \in G_b} \beta_j^2 = a_1^2(b).$$
The key observation is that both $q_0$ and $q_1$ can be sampled without any need for communication because each party $j$ knows his own parameters $\alpha_j^2$ and $\beta_j^2$, which is sufficient to draw independently a local Rademacher random variable $B_j$ or $-B_j$. Moreover, a single unbiased independent random bit $S$ drawn by a designated party suffices to sample collectively from distribution $q = \frac{q_0 + q_1}{2}$, provided this bit is transmitted to all parties: everybody samples according to $q_0$ if $S = 0$ or to $q_1$ if $S = 1$. Furthermore, it follows from (2) that $p_0(b) + p_1(b) = a_0^2(b) + a_1^2(b) = q_0(b) + q_1(b)$ for every $b \in \{-1,+1\}^n$, and therefore $p_0(b) \le q_0(b) + q_1(b) = 2q(b)$. The relevance of all these observations is that we can apply von Neumann's rejection algorithm [19] to sample $p_0$ since it is bounded by the small constant 2 that multiplies the easy-to-draw probability distribution $q$.

Algorithm 2: Sampling $p_0$ Using von Neumann's Algorithm
1: repeat
2:   Generate independent Rademacher random variables $B_1, \ldots, B_n$ with parameters $\alpha_1^2, \ldots, \alpha_n^2$
3:   Generate an unbiased independent random bit $S$
4:   if $S = 1$ then
5:     set $B \leftarrow (-B_1, \ldots, -B_n)$
6:   else
7:     set $B \leftarrow (B_1, \ldots, B_n)$
8:   end if   \{Random variable $B$ is now sampled according to $q$\}
9:   Generate $U$ uniformly on $[0,1]$
10: until $2Uq(B) \le p_0(B)$
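Under the idealized assumptions of this stage (exact real arithmetic and a continuous uniform generator), the rejection loop of Algorithm 2 can be sketched in Python as follows (our code and naming, not the paper's):

```python
import random
from itertools import product
from math import cos, sin, pi

def sample_p0(phis, rng=random.random):
    """von Neumann rejection sketch for p0 (Algorithm 2)."""
    n = len(phis)
    alpha2 = [sin((phi + pi / 2) / 2) ** 2 for phi in phis]   # alpha_j^2, Eq. (4)

    def prod_trig(b, trig):   # product in Eq. (3), with trig = cos or sin
        out = 1.0
        for phi_j, b_j in zip(phis, b):
            out *= trig((phi_j - (pi / 2) * b_j) / 2)
        return out

    def p0(b):
        a0 = prod_trig(b, cos)
        a1 = (-1) ** n * prod_trig(b, sin)
        return (a0 + a1) ** 2 / 2                              # Eq. (2)

    def q(b):   # q = (q0 + q1)/2 with q0 = a0^2 and q1 = a1^2
        return (prod_trig(b, cos) ** 2 + prod_trig(b, sin) ** 2) / 2

    while True:
        B = [+1 if rng() < a2 else -1 for a2 in alpha2]        # B ~ q0
        if random.getrandbits(1):                              # random bit S
            B = [-b for b in B]                                # flipped: B ~ q1
        B = tuple(B)
        if 2 * rng() * q(B) <= p0(B):                          # accept w.p. p0/(2q)
            return B
```

Each trip round the loop accepts with probability $p_0(B)/2q(B)$, so an expected 2 iterations suffice, in line with the analysis below.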
For the moment, we assume once again the availability of a continuous uniform random generator, which we shall later replace by a source of unbiased independent random bits. We also assume for the moment that we can compute the $\alpha_j$'s, $q(b)$ and $p_0(b)$ exactly. This gives rise to Algorithm 2.

By the general principle of von Neumann's rejection algorithm, probability distribution $p_0$ is successfully sampled after an expected number of 2 iterations round the loop because $p_0(b) \le 2q(b)$ for all $b \in \{-1,+1\}^n$. Within one iteration, two expected independent unbiased random bits suffice to generate each of the $n$ Rademacher random variables by a process similar to what is explained in the second paragraph of Section II-A. Hence an expected total of $2n + 1$ random bits is needed each time round the loop, for an expected grand total of $4n + 2$ bits to sample $p_0$. But of course, this does not take account of the (apparent) need to generate the continuous uniform $[0,1]$ random variable $U$. It follows that the expected total amount of work required by Algorithm 2 is $O(n)$, provided we count infinite real arithmetic at unit cost. Furthermore, the time taken by this algorithm, divided by $n$, is stochastically smaller than a geometric random variable with constant mean, so its tail is exponentially decreasing.

Now, we modify and adapt this algorithm to eliminate the need for the continuous uniform $U$ (and hence its generation), which is not allowed in the random bit model.
Furthermore, we eliminate the need for infinite real arithmetic and for the exact values of $q(B)$ and $p_0(B)$, which would be impossible to obtain in our distributed setting since the parameters needed to compute these values are scattered among all parties, and replace them with approximations; we postpone to Section III the issue of how these approximations can be computed. (On the other hand, arbitrarily precise values of the $\alpha_j$'s are available to generate independent Rademacher random variables with these parameters because each party will be individually responsible for generating his own Rademacher variable.)

In each iteration of Algorithm 2, we generate a random variable $B$ according to distribution $q$ (lines 2 to 8) and it appears that we need the exact values of $q(B)$ and $p_0(B)$, as well as a uniform real random variable $U$, in order to decide on the last line whether to accept this $B$ or reject it and start all over again. However, testing if $2Uq(B) \le p_0(B)$ can be achieved with finite precision on all these parameters. For this purpose, we adapt the method developed for Algorithm 1 to generate a Bernoulli random variable $Y$ equal to 1 with probability exactly equal to $p_0(B)/2q(B)$. Again, we denote by $U[k]$ the $k$-bit truncation of $U$, so that $U[k] \le U \le U[k] + 2^{-k}$. Furthermore, we use $L_k$ ($L$ for left) and $R_k$ ($R$ for right) to denote $k$-bit approximations of $2q(B)$ and $p_0(B)$, respectively, so that $|L_k - 2q(B)| \le 2^{-k}$ and $|R_k - p_0(B)| \le 2^{-k}$. Then, we use $\varepsilon_k$ to denote the real number in interval $[-1,1]$ such that $2q(B) = L_k + \varepsilon_k 2^{-k}$, so that
$$|2Uq(B) - U[k]L_k| = \bigl|U(L_k + \varepsilon_k 2^{-k}) - U[k]L_k\bigr| = \bigl|(U - U[k])L_k + U\varepsilon_k 2^{-k}\bigr| \le L_k 2^{-k} + U 2^{-k} \le 3 \cdot 2^{-k}.$$
Furthermore, because $R_k$ is a $k$-bit approximation of $p_0(B)$, $|R_k - p_0(B)| \le 2^{-k}$. Thus, we know that $Y = 1$ whenever $U[k]L_k + 3/2^k < R_k - 1/2^k$, whereas $Y = 0$ whenever $U[k]L_k - 3/2^k > R_k + 1/2^k$. Otherwise, we are in the uncertainty zone and we need more bits of $U$, $q(B)$ and $p_0(B)$ before we can decide on the value of $Y$. This is formalized in Algorithm 3.

It follows from the above discussion that this algorithm can serve to sample Bernoulli random variable $Y$, in effect replacing line 9 in Algorithm 2 and providing the terminating condition "until $Y = 1$" as its line 10. But how many iterations does Algorithm 3 require? Let $K$ be a random variable corresponding to the value of $k$ upon exiting from the loop in this algorithm, which is the number of times round the loop and hence the number of bits needed from $U$ and the precision in $2q(B)$ and $p_0(B)$ required in order to sample exactly Bernoulli random variable $Y$.
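The finite-precision acceptance test just described can be sketched as follows (our naming, not the paper's pseudocode; `twoq_approx` and `p0_approx` are hypothetical callbacks serving the $k$-bit approximations $L_k$ and $R_k$):

```python
import random

def accept_with_approx(twoq_approx, p0_approx):
    """Algorithm-3-style test: Y = 1 with probability p0(B)/(2 q(B)).

    twoq_approx(k) and p0_approx(k) return k-bit approximations L_k of
    2q(B) and R_k of p0(B); one truncation bit of U is revealed per
    iteration until acceptance or rejection is certain.
    """
    U_k, k = 0.0, 0
    while True:
        k += 1
        U_k += random.getrandbits(1) * 2.0 ** -k
        L, R = twoq_approx(k), p0_approx(k)
        eps = 2.0 ** -k
        if U_k * L + 3 * eps < R - eps:      # certainly 2Uq(B) < p0(B): accept
            return 1
        if U_k * L - 3 * eps > R + eps:      # certainly 2Uq(B) > p0(B): reject
            return 0
        # otherwise: uncertainty zone, ask for one more bit of precision

# Toy check with p0(B)/2q(B) = 0.3, serving exact values as "approximations"
ys = [accept_with_approx(lambda k: 1.0, lambda k: 0.3) for _ in range(200000)]
# fraction of Y == 1 should be near 0.3
```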
We are interested in an upper bound on $E\{K\}$, the expected value of $K$. If the algorithm has not yet halted after having processed $U[k]$, $L_k$ and $R_k$, it is because $|U[k]L_k - R_k| \le 4/2^k$. In this case, it is elementary to verify that $|2Uq(B) - p_0(B)| \le 8 \cdot 2^{-k}$ by rewriting $2Uq(B) - p_0(B)$ as the sum
$$\bigl(2Uq(B) - U[k]L_k\bigr) + \bigl(R_k - p_0(B)\bigr) + \bigl(U[k]L_k - R_k\bigr)$$
and invoking the triangle inequality on the fact that the absolute values of the three terms above are upper-bounded by $3/2^k$, $1/2^k$ and $4/2^k$, respectively. Therefore, given any $B$, using $\delta_B$ to denote $\frac{1}{2} \cdot 8 \cdot 2^{-k} \cdot \frac{1}{q(B)}$, we have
$$P\{K > k \mid B\} \le P\bigl\{|2Uq(B) - p_0(B)| \le 8/2^k \,\big|\, B\bigr\} = P\Bigl\{U \in \Bigl(\tfrac{p_0(B)}{2q(B)} - \delta_B,\ \tfrac{p_0(B)}{2q(B)} + \delta_B\Bigr)\Bigr\} \le 2\delta_B = \frac{8 \cdot 2^{-k}}{q(B)}.$$
Thus, using $k_0$ to denote $3 + \lceil \log_2(1/q(B)) \rceil$,
$$E\{K \mid B\} = \sum_{k=0}^{\infty} P\{K > k \mid B\} \le \sum_{k=0}^{\infty} \min\Bigl(1, \frac{8 \cdot 2^{-k}}{q(B)}\Bigr) \le \sum_{k < k_0} 1 + \sum_{k \ge k_0} \frac{8 \cdot 2^{-k}}{q(B)} \le 5 + \log_2\Bigl(\frac{1}{q(B)}\Bigr).$$
The last step uses the fact that $x + 2^{1-x} \le 2$ for all $0 \le x < 1$, where $x = \lceil \log_2(1/q(B)) \rceil - \log_2(1/q(B))$. Finally, we uncondition in order to conclude:
$$E\{K\} \le 5 + \sum_{b \in \{-1,+1\}^n} q(b) \log_2\Bigl(\frac{1}{q(b)}\Bigr) = H(q) + 5 \le n + 5, \tag{5}$$
where $H(q)$ denotes the Shannon entropy of $q = \frac{q_0 + q_1}{2}$.