


Mini Shell 1.0
DIR:/lib64/python2.7/site-packages/lxml/html/
Current File : //lib64/python2.7/site-packages/lxml/html/diff.pyc
# lxml/html/diff.py -- readable reconstruction of the compiled diff.pyc named
# above.  The names, constants and docstrings below are recovered from the
# bytecode dump; the function bodies follow the standard lxml implementation
# and are included only as a readable approximation of the compiled code.

import difflib
from lxml import etree
from lxml.html import fragment_fromstring
import re

__all__ = ['html_annotate', 'htmldiff']

try:
    from html import escape as html_escape
except ImportError:
    from cgi import escape as html_escape
try:
    _unicode = unicode
except NameError:
    # Python 3
    _unicode = str
try:
    basestring
except NameError:
    # Python 3
    basestring = str


def default_markup(text, version):
    return '<span title="%s">%s</span>' % (
        html_escape(_unicode(version), 1), text)


def html_annotate(doclist, markup=default_markup):
    """
    doclist should be ordered from oldest to newest, like::

        >>> version1 = 'Hello World'
        >>> version2 = 'Goodbye World'
        >>> print(html_annotate([(version1, 'version 1'),
        ...                      (version2, 'version 2')]))
        <span title="version 2">Goodbye</span> <span title="version 1">World</span>

    The documents must be *fragments* (str/UTF8 or unicode), not
    complete documents

    The markup argument is a function to markup the spans of words.
    This function is called like markup('Hello', 'version 2'), and
    returns HTML.  The first argument is text and never includes any
    markup.  The default uses a span with a title:

        >>> print(default_markup('Some Text', 'by Joe'))
        <span title="by Joe">Some Text</span>
    """
    # tokenize each version and tag every token with the version
    # ("annotation") in which it first appeared
    tokenlist = [tokenize_annotated(doc, version)
                 for doc, version in doclist]
    cur_tokens = tokenlist[0]
    for tokens in tokenlist[1:]:
        html_annotate_merge_annotations(cur_tokens, tokens)
        cur_tokens = tokens
    # combine adjacent spans that share an annotation, then serialize
    cur_tokens = compress_tokens(cur_tokens)
    result = markup_serialize_tokens(cur_tokens, markup)
    return ''.join(result).strip()


def tokenize_annotated(doc, annotation):
    """Tokenize a document and add an annotation attribute to each token
    """
    tokens = tokenize(doc, include_hrefs=False)
    for tok in tokens:
        tok.annotation = annotation
    return tokens

cCs{td|d|�}|j�}xS|D]K\}}}}}|dkr(|||!}	|||!}
t|	|
�q(q(WdS(s�Merge the annotations from tokens_old into tokens_new, when the
    tokens in the new document already existed in the old document.
    tatbtequalN(tInsensitiveSequenceMatchertget_opcodestcopy_annotations(t
tokens_oldt
tokens_newtstcommandstcommandti1ti2tj1tj2teq_oldteq_new((s4/usr/lib64/python2.7/site-packages/lxml/html/diff.pyROs

cCsNt|�t|�kst�x)t||�D]\}}|j|_q.WdS(sN
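# Aside (illustrative only, not part of the original diff.pyc): the merge
# step above is driven by difflib opcodes.  A minimal, self-contained sketch
# of that mechanism on plain word lists -- the 'equal' ranges are the spans
# whose annotations get copied over:
def _opcode_sketch():
    import difflib
    old = 'Hello World'.split()
    new = 'Goodbye World'.split()
    return [(op, old[i1:i2], new[j1:j2])
            for op, i1, i2, j1, j2
            in difflib.SequenceMatcher(a=old, b=new).get_opcodes()]
    # -> [('replace', ['Hello'], ['Goodbye']), ('equal', ['World'], ['World'])]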
    Copy annotations from the tokens listed in src to the tokens in dest
    N(tlentAssertionErrortzipR(tsrctdesttsrc_toktdest_tok((s4/usr/lib64/python2.7/site-packages/lxml/html/diff.pyR"\scCsq|dg}x]|dD]Q}|djr\|jr\|dj|jkr\t||�q|j|�qW|S(sm
    Combine adjacent tokens when there is no HTML between the tokens, 
    and they share an annotation
    iii����(t	post_tagstpre_tagsRtcompress_merge_backtappend(RRR((s4/usr/lib64/python2.7/site-packages/lxml/html/diff.pyR
ds

c	Cs�|d}t|�tk	s.t|�tk	r>|j|�nit|�}|jr`|d7}n||7}t|d|jd|jd|j�}|j|_||d<dS(sY Merge tok into the last element of tokens (modifying the list of
    tokens in-place).  i����t R6R5ttrailing_whitespaceN(ttypettokenR8RR:R6R5R(RRtlastRtmerged((s4/usr/lib64/python2.7/site-packages/lxml/html/diff.pyR7ss
$	

			ccs�xy|D]q}x|jD]}|VqW|j�}|||j�}|jrZ|d7}n|Vx|jD]}|VqiWqWdS(sz
    Serialize the list of tokens into a list of text chunks, calling
    markup_func around text to add annotations.
    R9N(R6thtmlRR:R5(Rtmarkup_funcR<tpreR?tpost((s4/usr/lib64/python2.7/site-packages/lxml/html/diff.pyR�s
		
cCsFt|�}t|�}t||�}dj|�j�}t|�S(s� Do a diff of the old and new document.  The documents are HTML
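

# Aside (illustrative only, not part of the original diff.pyc): html_annotate
# accepts any callable with the (text, version) signature described in its
# docstring in place of default_markup, e.g. to emit class names instead of
# title attributes:
def _custom_markup_sketch():
    def markup(text, version):
        return '<span class="rev-%s">%s</span>' % (version, text)
    return html_annotate([('Hello World', 'v1'), ('Goodbye World', 'v2')],
                         markup=markup)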
    *fragments* (str/UTF8 or unicode), they are not complete documents
    (i.e., no <html> tag).

    Returns HTML with <ins> and <del> tags added around the
    appropriate text.  

    Markup is generally ignored, with the markup from new_html
    preserved, and possibly some markup from old_html (though it is
    considered acceptable to lose some of the old markup).  Only the
    words in the HTML are diffed.  The exception is <img> tags, which
    are treated like words, and the href attribute of <a> tags, which
    are noted inside the tag itself when there are changes.
    R
    """
    old_html_tokens = tokenize(old_html)
    new_html_tokens = tokenize(new_html)
    result = htmldiff_tokens(old_html_tokens, new_html_tokens)
    result = ''.join(result).strip()
    return fixup_ins_del_tags(result)


def htmldiff_tokens(html1_tokens, html2_tokens):
    """ Does a diff on the tokens themselves, returning a list of text
    chunks (not tokens).
    """
    s = InsensitiveSequenceMatcher(a=html1_tokens, b=html2_tokens)
    commands = s.get_opcodes()
    result = []
    for command, i1, i2, j1, j2 in commands:
        if command == 'equal':
            result.extend(expand_tokens(html2_tokens[j1:j2], equal=True))
            continue
        if command == 'insert' or command == 'replace':
            ins_tokens = expand_tokens(html2_tokens[j1:j2])
            merge_insert(ins_tokens, result)
        if command == 'delete' or command == 'replace':
            del_tokens = expand_tokens(html1_tokens[i1:i2])
            merge_delete(del_tokens, result)
    # deletes are added as DEL_START/DEL_END markers first and only resolved
    # into real <del> tags once the whole document has been assembled:
    result = cleanup_delete(result)
    return result


def expand_tokens(tokens, equal=False):
    """Given a list of tokens, return a generator of the chunks of
    text for the data in the tokens.
    """
    for token in tokens:
        for pre in token.pre_tags:
            yield pre
        if not equal or not token.hide_when_equal:
            if token.trailing_whitespace:
                yield token.html() + ' '
            else:
                yield token.html()
        for post in token.post_tags:
            yield post


def merge_insert(ins_chunks, doc):
    """ doc is the already-handled document (as a list of text chunks);
    here we add <ins>ins_chunks</ins> to the end of that.  """
    unbalanced_start, balanced, unbalanced_end = split_unbalanced(ins_chunks)
    doc.extend(unbalanced_start)
    if doc and not doc[-1].endswith(' '):
        # make sure the word before the insert ends with a space
        doc[-1] += ' '
    doc.append('<ins>')
    if balanced and balanced[-1].endswith(' '):
        # move the trailing space outside of </ins>
        balanced[-1] = balanced[-1][:-1]
    doc.extend(balanced)
    doc.append('</ins> ')
    doc.extend(unbalanced_end)


# Sentinels marking the start/end of a pending delete until cleanup_delete()
# turns them into real markup:
class DEL_START:
    pass


class DEL_END:
    pass


class NoDeletes(Exception):
    """ Raised when the document no longer contains any pending deletes
    (DEL_START/DEL_END) """


def merge_delete(del_chunks, doc):
    """ Adds the text chunks in del_chunks to the document doc (another
    list of text chunks) with marker to show it is a delete.
    cleanup_delete later resolves these markers into <del> tags."""
    doc.append(DEL_START)
    doc.extend(del_chunks)
    doc.append(DEL_END)


def cleanup_delete(chunks):
    """ Cleans up any DEL_START/DEL_END markers in the document, replacing
    them with <del></del>.  To do this while keeping the document
    valid, it may need to drop some tags (either start or end tags).

    It may also move the del into adjacent tags to try to move it to a
    similar location where it was originally located (e.g., moving a
    delete into preceding <div> tag, if the del looks like (DEL_START,
    'Text</div>', DEL_END)"""
    # (compiled body not reproduced here: it repeatedly pulls the next
    # DEL_START/DEL_END span out with split_delete(), re-balances it with
    # split_unbalanced() plus locate_unbalanced_start()/_end(), wraps the
    # balanced part in '<del>' ... '</del> ', and stops when split_delete()
    # raises NoDeletes; the rewritten chunk list is returned.)


def split_unbalanced(chunks):
    """Return (unbalanced_start, balanced, unbalanced_end), where each is
    a list of text and tag chunks.

    unbalanced_start is a list of all the tags that are opened, but
    not closed in this span.  Similarly, unbalanced_end is a list of
    tags that are closed but were not opened.  Extracting these might
    mean some reordering of the chunks."""
    # (compiled body not reproduced here: it walks the chunks with a tag
    # stack, treating anything in empty_tags as ordinary balanced content,
    # collecting unmatched start tags into the first list and unmatched end
    # tags into the last.)


def split_delete(chunks):
    """ Returns (stuff_before_DEL_START, stuff_inside_DEL_START_END,
    stuff_after_DEL_END).  Returns the first case found (there may be
    more DEL_STARTs in stuff_after_DEL_END).  Raises NoDeletes if
    there's no DEL_START found. """
    try:
        pos = chunks.index(DEL_START)
    except ValueError:
        raise NoDeletes
    pos2 = chunks.index(DEL_END)
    return chunks[:pos], chunks[pos + 1:pos2], chunks[pos2 + 1:]


def locate_unbalanced_start(unbalanced_start, pre_delete, post_delete):
    """ pre_delete and post_delete implicitly point to a place in the
    document (where the two were split).  This moves that point (by
    popping items from one and pushing them onto the other).  It moves
    the point to try to find a place where unbalanced_start applies.

    As an example::

        >>> unbalanced_start = ['<div>']
        >>> doc = ['<p>', 'Text', '</p>', '<div>', 'More Text', '</div>']
        >>> pre, post = doc[:3], doc[3:]
        >>> pre, post
        (['<p>', 'Text', '</p>'], ['<div>', 'More Text', '</div>'])
        >>> locate_unbalanced_start(unbalanced_start, pre, post)
        >>> pre, post
        (['<p>', 'Text', '</p>', '<div>'], ['More Text', '</div>'])

    As you can see, we moved the point so that the dangling <div> that
    we found will be effectively replaced by the div in the original
    document.  If this doesn't work out, we just throw away
    unbalanced_start without doing anything.
    """
    # (compiled body not reproduced here: while the first unbalanced start
    # tag matches the next start tag in post_delete -- and that tag is not
    # <ins> or <del> -- the tag is popped from both lists, moving the split
    # point forward; anything else stops the search.)


def locate_unbalanced_end(unbalanced_end, pre_delete, post_delete):
    """ like locate_unbalanced_start, except handling end tags and
    possibly moving the point earlier in the document.  """
    # (compiled body not reproduced here: the mirror image of the function
    # above, matching trailing end tags in pre_delete and moving them into
    # post_delete.)


class token(_unicode):
    """ Represents a diffable token, generally a word that is displayed to
    the user.  Opening tags are attached to this token when they are
    adjacent (pre_tags) and closing tags that follow the word
    (post_tags).  Some exceptions occur when there are empty tags
    adjacent to a word, so there may be close tags in pre_tags, or
    open tags in post_tags.

    We also keep track of whether the word was originally followed by
    whitespace, even though we do not want to treat the word as
    equivalent to a similar word that does not have a trailing
    space."""

    # when true, the token is hidden from the diff output if it is unchanged
    # (used by href_token below):
    hide_when_equal = False

    def __new__(cls, text, pre_tags=None, post_tags=None,
                trailing_whitespace=False):
        obj = _unicode.__new__(cls, text)
        obj.pre_tags = pre_tags if pre_tags is not None else []
        obj.post_tags = post_tags if post_tags is not None else []
        obj.trailing_whitespace = trailing_whitespace
        return obj

    def __repr__(self):
        return 'token(%s, %r, %r)' % (
            _unicode.__repr__(self), self.pre_tags, self.post_tags)

    def html(self):
        return _unicode(self)


class tag_token(token):
    """ Represents a token that is actually a tag.  Currently this is just
    the <img> tag, which takes up visible space just like a word but
    is only represented in a document by a tag.  """

    def __new__(cls, tag, data, html_repr, pre_tags=None,
                post_tags=None, trailing_whitespace=False):
        obj = token.__new__(cls, '%s: %s' % (tag, data),
                            pre_tags=pre_tags,
                            post_tags=post_tags,
                            trailing_whitespace=trailing_whitespace)
        obj.tag = tag
        obj.data = data
        obj.html_repr = html_repr
        return obj

    def __repr__(self):
        return ('tag_token(%s, %s, html_repr=%s, post_tags=%r, '
                'pre_tags=%r, trailing_whitespace=%s)' % (
                    self.tag, self.data, self.html_repr,
                    self.post_tags, self.pre_tags, self.trailing_whitespace))

    def html(self):
        return self.html_repr


class href_token(token):
    """ Represents the href in an anchor tag.  Unlike other words, we only
    show the href when it changes.  """

    hide_when_equal = True

    def html(self):
        return ' Link: %s' % self


def tokenize(html, include_hrefs=True):
    """
    Parse the given HTML and returns token objects (words with attached tags).

    This parses only the content of a page; anything in the head is
    ignored, and the <head> and <body> elements are themselves
    optional.  The content is then parsed by lxml, which ensures the
    validity of the resulting parsed document (though lxml may make
    incorrect guesses when the markup is particular bad).

    <ins> and <del> tags are also eliminated from the document, as
    that gets confusing.

    If include_hrefs is true, then the href attribute of <a> tags is
    included as a special kind of diffable token."""
    if etree.iselement(html):
        body_el = html
    else:
        body_el = parse_html(html, cleanup=True)
    # flatten the document into tag/word chunks, then group the chunks back
    # into token objects:
    chunks = flatten_el(body_el, skip_tag=True, include_hrefs=include_hrefs)
    return fixup_chunks(chunks)


def parse_html(html, cleanup=True):
    """
    Parses an HTML fragment, returning an lxml element.  Note that the HTML will be
    wrapped in a <div> tag that was not in the original document.

    If cleanup is true, make sure there's no <head> or <body>, and get
    rid of any <ins> and <del> tags.
    """
    if cleanup:
        html = cleanup_html(html)
    return fragment_fromstring(html, create_parent=True)


_body_re = re.compile(r'<body.*?>', re.I | re.S)
_end_body_re = re.compile(r'</body.*?>', re.I | re.S)
_ins_del_re = re.compile(r'</?(ins|del).*?>', re.I | re.S)


def cleanup_html(html):
    """ This 'cleans' the HTML, meaning that any page structure is removed
    (only the contents of <body> are used, if there is any <body).
    Also <ins> and <del> tags are removed.  """
    match = _body_re.search(html)
    if match:
        html = html[match.end():]
    match = _end_body_re.search(html)
    if match:
        html = html[:match.start()]
    html = _ins_del_re.sub('', html)
    return html


end_whitespace_re = re.compile(r'[ \t\n\r]$')


def fixup_chunks(chunks):
    """
    This function takes a list of chunks and produces a list of tokens.
    """
    # (compiled body not reproduced here: start tags are accumulated until
    # the next word, ('img', src, tag) and ('href', url) tuples become
    # tag_token and href_token objects, end tags are attached to the
    # preceding word's post_tags, and an inconsistent chunk stream trips the
    # assertion "Weird state, cur_word=%r, result=%r, chunks=%r of %r".)


# elements that do not take end tags:
empty_tags = (
    'param', 'img', 'area', 'br', 'basefont', 'input',
    'base', 'meta', 'link', 'col')

block_level_tags = (
    'address', 'blockquote', 'center', 'dir', 'div', 'dl', 'fieldset',
    'form', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'hr', 'isindex', 'menu',
    'noframes', 'noscript', 'ol', 'p', 'pre', 'table', 'ul')

block_level_container_tags = (
    'dd', 'dt', 'frameset', 'li', 'tbody', 'td', 'tfoot', 'th',
    'thead', 'tr')


def flatten_el(el, include_hrefs, skip_tag=False):
    """ Takes an lxml element el, and generates all the text chunks for
    that tag.  Each start tag is a chunk, each word is a chunk, and each
    end tag is a chunk.

    If skip_tag is true, then the outermost container tag is
    not returned (just its contents)."""
    if not skip_tag:
        if el.tag == 'img':
            yield ('img', el.get('src'), start_tag(el))
        else:
            yield start_tag(el)
    if el.tag in empty_tags and not el.text and not len(el) and not el.tail:
        return
    for word in split_words(el.text):
        yield html_escape(word)
    for child in el:
        for item in flatten_el(child, include_hrefs=include_hrefs):
            yield item
    if el.tag == 'a' and include_hrefs and el.get('href'):
        yield ('href', el.get('href'))
    if not skip_tag:
        yield end_tag(el)
        for word in split_words(el.tail):
            yield html_escape(word)


def split_words(text):
    """ Splits some text into words. Includes trailing whitespace (one
    space) on each word when appropriate.  """
    if not text or not text.strip():
        return []
    words = [w + ' ' for w in text.strip().split()]
    if not end_whitespace_re.search(text):
        # the last word keeps its lack of trailing whitespace
        words[-1] = words[-1][:-1]
    return words


start_whitespace_re = re.compile(r'^[ \t\n\r]')


def start_tag(el):
    """
    The text representation of the start tag for a tag.
    """
    return '<%s%s>' % (
        el.tag, ''.join([' %s="%s"' % (name, html_escape(value, True))
                         for name, value in el.attrib.items()]))


def end_tag(el):
    """ The text representation of an end tag for a tag.  Includes
    trailing whitespace when appropriate.  """
    extra = ' ' if el.tail and start_whitespace_re.search(el.tail) else ''
    return '</%s>%s' % (el.tag, extra)


def is_word(tok):
    return not tok.startswith('<')


def is_end_tag(tok):
    return tok.startswith('</')


def is_start_tag(tok):
    return tok.startswith('<') and not tok.startswith('</')


def fixup_ins_del_tags(html):
    """ Given an html string, move any <ins> or <del> tags inside of any
    block-level elements, e.g. transform <ins><p>word</p></ins> to
    <p><ins>word</ins></p> """
    doc = parse_html(html, cleanup=False)
    _fixup_ins_del_tags(doc)
    html = serialize_html_fragment(doc, skip_outer=True)
    return html


def serialize_html_fragment(el, skip_outer=False):
    """ Serialize a single lxml element as HTML.  The serialized form
    includes the elements tail.

    If skip_outer is true, then don't serialize the outermost tag
    """
    assert not isinstance(el, basestring), (
        "You should pass in an element, not a string like %r" % el)
    html = etree.tostring(el, method='html', encoding=_unicode)
    if skip_outer:
        # strip the wrapper element added by parse_html()
        html = html[html.find('>') + 1:]
        html = html[:html.rfind('<')]
        return html.strip()
    return html


def _fixup_ins_del_tags(doc):
    """fixup_ins_del_tags that works on an lxml document in-place
    """
    for tag in ['ins', 'del']:
        for el in doc.xpath('descendant-or-self::%s' % tag):
            if not _contains_block_level_tag(el):
                continue
            _move_el_inside_block(el, tag=tag)
            el.drop_tag()


def _contains_block_level_tag(el):
    """True if the element contains any block-level elements, like <p>, <td>, etc.
    """
    if el.tag in block_level_tags or el.tag in block_level_container_tags:
        return True
    for child in el:
        if _contains_block_level_tag(child):
            return True
    return False


def _move_el_inside_block(el, tag):
    """ helper for _fixup_ins_del_tags; actually takes the <ins> etc tags
    and moves them inside any block-level tags.  """
    # (compiled body not reproduced here: if no child contains a block-level
    # element, el's contents are simply wrapped in a new <tag> element;
    # otherwise the wrapper is pushed down recursively around each inline
    # child, text and tail.)


def _merge_element_contents(el):
    """
    Removes an element, but merges its contents into its place, e.g.,
    given <p>Hi <i>there!</i></p>, if you remove the <i> element you get
    <p>Hi there!</p>
    """
    # (compiled body not reproduced here: el's text, children and tail are
    # spliced into the parent at el's position, then el itself is removed.)


class InsensitiveSequenceMatcher(difflib.SequenceMatcher):
    """
    Acts like SequenceMatcher, but tries not to find very small equal
    blocks amidst large spans of changes
    """

    threshold = 2

    def get_matching_blocks(self):
        size = min(len(self.a), len(self.b))
        threshold = min(self.threshold, size / 4)
        actual = difflib.SequenceMatcher.get_matching_blocks(self)
        return [item for item in actual
                if item[2] > threshold or not item[2]]


if __name__ == '__main__':
    from lxml.html import _diffcommand
    _diffcommand.main()
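
Usage sketch (illustrative, not part of the dumped file): the two public entry
points recovered above are htmldiff() and html_annotate().  Assuming a working
lxml installation, they are typically driven like this:

    from lxml.html.diff import htmldiff, html_annotate

    old = '<p>Hello World</p>'
    new = '<p>Goodbye World</p>'

    # word-level diff with <ins>/<del> markup, keeping the markup of `new`
    print(htmldiff(old, new))
    # e.g. <p><ins>Goodbye</ins> <del>Hello</del> World</p>

    # per-word provenance annotations, oldest version first
    print(html_annotate([(old, 'v1'), (new, 'v2')]))
    # each word is wrapped in <span title="...">, as in the doctest above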