# Extension to provide automatic server-side caching of bundles for pulls
#
# Copyright 2018 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""pullbundle: automatic server side bundle caching

General principle
=================

This extension provides a means for servers to use pre-computed bundles when
serving arbitrary pulls. If missing, the necessary pre-computed bundles are
generated on demand.

To maximize reuse of existing cached bundles, each pull is served through
multiple bundles. These bundles are created using "standard ranges" from the
"stablerange" principle. The "stablerange" concept is already used for
obsmarkers discovery in the evolve extension.
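
For instance (a simplified sketch ignoring exact topology), a pull of 300
changesets along a single head decomposes into standard power-of-two cuts:

    300 changesets -> subranges of 256 + 32 + 8 + 4 changesets

so the largest subranges have a good chance of being reused to serve later,
overlapping pulls.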

Using pull Bundle
=================

All configuration is only required on the server side.

The "stablerange" code currently still live in the evolve extensions, so for
now enabling that extensions is required:

You need at minimum the following configuration:

    [extensions]
    evolve=yes
    pullbundle=yes
    [experimental]
    obshashrange.warm-cache = yes

If you do not want to use evolution on the server, you should disable obsmarkers exchange:

    [experimental]
    evolution.exchange=no

Extra Configuration
===================

  [pullbundle]
  # By default, bundles are stored in `.hg/cache/pullbundles/`.
  # This can be changed with the following config:
  cache-directory=/absolute/path

Implementation status
=====================

Both stablerange and pullbundle use "simple" initial implementations. These
implementations focus on testing the algorithms and proving that the features
work. Yet they are already useful and used in production.

Performance is expected to improve greatly in the final implementation,
especially if some of it ends up being compiled code.

This first implementation lacks the ability to serve the cached bundles from a
CDN. We'll want this limitation to be lifted quickly.

The way Mercurial core reports progress is designed for the reception of a
single changegroup, so currently using pullbundle means flooding the user with
output. This will have to be fixed.

Why does this live in the same repository as evolve
======================================================

There is no fundamental reason for this to live in the same repository.
However, the stablerange data structure lives in evolve, so it was simpler to
put this new extension next to it. As soon as stablerange has been upstreamed,
we won't need the dependency on the evolve extension anymore.
"""

import collections
import errno
import os
import random

from mercurial import (
    changegroup,
    discovery,
    error,
    exchange,
    narrowspec,
    node as nodemod,
    registrar,
    scmutil,
    ui as uimod,
    util,
)

from mercurial.i18n import _

__version__ = b'0.1.1'
testedwith = b'4.4 4.5 4.6 4.7.1'
minimumhgversion = b'4.4'
buglink = b'https://bz.mercurial-scm.org/'

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

configitem(b'pullbundle', b'cache-directory',
           default=None,
)

# generic wrapping

def uisetup(ui):
    exchange.getbundle2partsmapping[b'changegroup'] = _getbundlechangegrouppart

def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    if not kwargs.get(r'cg', True):
        return

    version = b'01'
    cgversions = b2caps.get(b'changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(repo)]
        if not cgversions:
            raise ValueError(_(b'no common changegroup version'))
        version = max(cgversions)

    outgoing = exchange._computeoutgoing(repo, heads, common)
    if not outgoing.missing:
        return

    if kwargs.get(r'narrow', False):
        include = sorted(filter(bool, kwargs.get(r'includepats', [])))
        exclude = sorted(filter(bool, kwargs.get(r'excludepats', [])))
        filematcher = narrowspec.match(repo.root, include=include,
                                       exclude=exclude)
    else:
        filematcher = None

    # START OF ALTERED PART
    makeallcgpart(bundler.newpart, repo, outgoing, version, source, bundlecaps,
                  filematcher, cgversions)
    # END OF ALTERED PART

    if kwargs.get(r'narrow', False) and (include or exclude):
        narrowspecpart = bundler.newpart(b'narrow:spec')
        if include:
            narrowspecpart.addparam(
                b'include', b'\n'.join(include), mandatory=True)
        if exclude:
            narrowspecpart.addparam(
                b'exclude', b'\n'.join(exclude), mandatory=True)

def makeallcgpart(newpart, repo, outgoing, version, source,
                  bundlecaps, filematcher, cgversions):
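    # Slice the pull into multiple cacheable changegroup parts when possible.
    # Narrow pulls (and servers without evolve's stablerange) fall back to a
    # single, uncached changegroup part.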

    pullbundle = not filematcher
    if pullbundle and not util.safehasattr(repo, 'stablerange'):
        repo.ui.warn(b'pullbundle: required extension "evolve" is missing, skipping pullbundle\n')
        pullbundle = False
    if not pullbundle:
        makeonecgpart(newpart, repo, None, outgoing, version, source, bundlecaps,
                      filematcher, cgversions)
    else:
        start = util.timer()
        slices = sliceoutgoing(repo, outgoing)
        end = util.timer()
        msg = _(b'pullbundle-cache: "missing" set sliced into %d subranges '
                b'in %f seconds\n')
        repo.ui.write(msg % (len(slices), end - start))
        for sliceid, sliceout in slices:
            makeonecgpart(newpart, repo, sliceid, sliceout, version, source, bundlecaps,
                          filematcher, cgversions)

# stable range slicing

DEBUG = False

def sliceoutgoing(repo, outgoing):
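    # Slice the outgoing "missing" set along stable ranges, returning a list
    # of (rangeid, outgoing) pairs, one per subrange, each of which can be
    # served from (and stored into) the bundle cache.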
    cl = repo.changelog
    rev = getgetrev(cl)
    node = cl.node
    revsort = repo.stablesort

    missingrevs = set(rev(n) for n in outgoing.missing)
    if DEBUG:
        ms = missingrevs.copy()
        ss = []
    allslices = []
    missingheads = [rev(n) for n in sorted(outgoing.missingheads, reverse=True)]
    for head in missingheads:
        localslices = []
        localmissing = set(repo.revs(b'%ld and ::%d', missingrevs, head))
        thisrunmissing = localmissing.copy()
        while localmissing:
            slicerevs = []
            for r in revsort.walkfrom(repo, head):
                if r not in thisrunmissing:
                    break
                slicerevs.append(r)
            slicenodes = [node(r) for r in slicerevs]
            localslices.append(canonicalslices(repo, slicenodes))
            if DEBUG:
                ss.append(slicerevs)
            missingrevs.difference_update(slicerevs)
            localmissing.difference_update(slicerevs)
            if localmissing:
                heads = list(repo.revs(b'heads(%ld)', localmissing))
                heads.sort(key=node)
                head = heads.pop()
                if heads:
                    thisrunmissing = repo.revs(b'%ld and only(%d, %ld)',
                                               localmissing,
                                               head,
                                               heads)
                else:
                    thisrunmissing = localmissing.copy()
        if DEBUG:
            for s in reversed(ss):
                ms -= set(s)
                missingbase = repo.revs(b'parents(%ld) and %ld', s, ms)
                if missingbase:
                    repo.ui.write_err(b'!!! rev bundled while parents missing\n')
                    repo.ui.write_err(b'    parent: %s\n' % list(missingbase))
                    pb = repo.revs(b'%ld and children(%ld)', s, missingbase)
                    repo.ui.write_err(b'    children: %s\n' % list(pb))
                    h = repo.revs(b'heads(%ld)', s)
                    repo.ui.write_err(b'    heads: %s\n' % list(h))
                    raise error.ProgrammingError(b'issuing a range before its parents')

        for s in reversed(localslices):
            allslices.extend(s)
    # unknown subranges might have had to be computed
    repo.stablerange.save(repo)
    return [(rangeid, outgoingfromnodes(repo, nodes))
            for rangeid, nodes in allslices]

def canonicalslices(repo, nodes):
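    # Turn one walked slice (nodes, ordered head first) into its canonical
    # stable subranges, pairing each rangeid with the nodes it covers.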
    depth = repo.depthcache.get
    stablerange = repo.stablerange
    rangelength = lambda x: stablerange.rangelength(repo, x)
    headrev = repo.changelog.rev(nodes[0])
    nbrevs = len(nodes)
    headdepth = depth(headrev)
    skipped = headdepth - nbrevs
    rangeid = (headrev, skipped)

    subranges = canonicalsubranges(repo, stablerange, rangeid)
    idx = 0
    slices = []
    nodes.reverse()
    for rangeid in subranges:
        size = rangelength(rangeid)
        slices.append((rangeid, nodes[idx:idx + size]))
        idx += size
    ### slow code block to validate ranges content
    # rev = repo.changelog.nodemap.get
    # for ri, ns in slices:
    #     a = set(rev(n) for n in ns)
    #     b = set(repo.stablerange.revsfromrange(repo, ri))
    #     l = repo.stablerange.rangelength(repo, ri)
    #     repo.ui.write('range-length: %d-%d %s %s\n' % (ri[0], ri[1], l, len(a)))
    #     if a != b:
    #         d =  (ri[0], ri[1], b - a, a - b)
    #         repo.ui.write("mismatching content: %d-%d -%s +%s\n" % d)
    return slices

def canonicalsubranges(repo, stablerange, rangeid):
    """slice a size of nodes into most reusable subranges

    We try to slice a range into a set of "largest" and "canonical" stable
    range.

    It might make sense to move this function as a 'stablerange' method.
    """
    headrev, skip = rangeid
    rangedepth = stablerange.depthrev(repo, rangeid[0])
    canonicals = []

    # 0. find the first power of 2 higher than this range depth
    cursor = 1
    while cursor <= rangedepth:
        cursor *= 2

    # 1. find the first cut
    precut = cut = 0
    while True:
        if skip <= cut:
            break
        if cut + cursor < rangedepth:
            precut = cut
            cut += cursor
        if cursor == 1:
            break
        cursor //= 2

    # 2. optimise, bottom part
    if skip != cut:
        currentsize = tailsize = cut - skip
        assert 0 < tailsize, tailsize

        # we need to take several "standard cuts" in the bottom part
        #
        # This is similar to what we will do for the top part, but reusing the
        # existing structure is a bit more complex.
        allcuts = list(reversed(standardcut(tailsize)))
        prerange = (headrev, precut)
        ### slow code block to check we operate on the right data
        # rev = repo.changelog.nodemap.get
        # allrevs = [rev(n) for n in nodes]
        # allrevs.reverse()
        # prerevs = repo.stablerange.revsfromrange(repo, prerange)
        # assert allrevs == prerevs[(len(prerevs) - len(allrevs)):]
        # end of check
        sub = list(stablerange.subranges(repo, prerange)[:-1])

        bottomranges = []
        # XXX we might be able to reuse core stable-range logic instead of
        # redoing this manually
        currentrange = sub.pop()
        currentsize = stablerange.rangelength(repo, currentrange)
        currentcut = None
        while allcuts or currentcut is not None:
            # get the next cut if needed
            if currentcut is None:
                currentcut = allcuts.pop()
            # attempt to apply the current cut
            if currentsize == currentcut:
                bottomranges.append(currentrange)
                currentcut = None
            elif currentsize < currentcut:
                bottomranges.append(currentrange)
                currentcut -= currentsize
            else: # currentsize > currentcut
                newskip = currentrange[1] + (currentsize - currentcut)
                currentsub = stablerange._slicesrangeat(repo, currentrange, newskip)
                bottomranges.append(currentsub.pop())
                sub.extend(currentsub)
                currentcut = None
            currentrange = sub.pop()
            currentsize = stablerange.rangelength(repo, currentrange)
        bottomranges.reverse()
        canonicals.extend(bottomranges)

    # 3. take recursive subranges until we get to a power-of-two size?
    current = (headrev, cut)
    while not poweroftwo(stablerange.rangelength(repo, current)):
        sub = stablerange.subranges(repo, current)
        canonicals.extend(sub[:-1])
        current = sub[-1]
    canonicals.append(current)

    return canonicals

def standardcut(size):
    assert 0 < size
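    # greedy power-of-two decomposition,
    # e.g. standardcut(300) == [256, 32, 8, 4]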
    # 0. find the first power of 2 higher than this size
    cut = 1
    while cut <= size:
        cut *= 2

    allcuts = []
    # 1. find all standard expected cuts
    while 1 < cut and size:
        cut //= 2
        if cut <= size:
            allcuts.append(cut)
            size -= cut
    return allcuts

def poweroftwo(num):
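    # true for a non-zero integer with a single bit set (a power of two)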
    return num and not num & (num - 1)

def outgoingfromnodes(repo, nodes):
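    # Build an outgoing object whose "missing" set covers exactly the given
    # nodes (used both as roots and as heads).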
    return discovery.outgoing(repo,
                              missingroots=nodes,
                              missingheads=nodes)

# changegroup part construction

def _changegroupinfo(repo, nodes, source):
    if repo.ui.verbose or source == b'bundle':
        repo.ui.status(_(b"%d changesets found\n") % len(nodes))

def _makenewstream(newpart, repo, outgoing, version, source,
                   bundlecaps, filematcher, cgversions):
    old = changegroup._changegroupinfo
    try:
        changegroup._changegroupinfo = _changegroupinfo
        if filematcher is not None:
            cgstream = changegroup.makestream(repo, outgoing, version, source,
                                              bundlecaps=bundlecaps,
                                              matcher=filematcher)
        else:
            cgstream = changegroup.makestream(repo, outgoing, version, source,
                                              bundlecaps=bundlecaps)
    finally:
        changegroup._changegroupinfo = old

    nbchanges = len(outgoing.missing)
    pversion = None
    if cgversions:
        pversion = version
    return (cgstream, nbchanges, pversion)

def _makepartfromstream(newpart, repo, cgstream, nbchanges, version):
    # same as upstream code

    part = newpart(b'changegroup', data=cgstream)
    if version:
        part.addparam(b'version', version)
    part.addparam(b'nbchanges', b'%d' % nbchanges,
                  mandatory=False)

    if b'treemanifest' in repo.requirements:
        part.addparam(b'treemanifest', b'1')

# cache management

def cachedir(repo):
    cachedir = repo.ui.config(b'pullbundle', b'cache-directory')
    if cachedir is not None:
        return cachedir
    return repo.cachevfs.join(b'pullbundles')

def getcache(repo, bundlename):
    cdir = cachedir(repo)
    bundlepath = os.path.join(cdir, bundlename)
    if not os.path.exists(bundlepath):
        return None
    # Delay file opening as much as possible. This introduces a small race
    # condition if someone removes the file before we actually use it.
    # However, opening too many files at once would not work.

    def data():
        with open(bundlepath, r'rb') as fd:
            for chunk in util.filechunkiter(fd):
                yield chunk
    return data()

def cachewriter(repo, bundlename, stream):
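    # Write-through helper: persist the stream into the cache (via an atomic
    # temporary file) while yielding each chunk to the caller unchanged.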
    cdir = cachedir(repo)
    bundlepath = os.path.join(cdir, bundlename)
    try:
        os.makedirs(cdir)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise
    with util.atomictempfile(bundlepath) as cachefile:
        for chunk in stream:
            cachefile.write(chunk)
            yield chunk

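# Cache entries are keyed by changegroup version, range head node, number of
# changesets skipped at the bottom of the range, and range size, giving names
# of the (illustrative) form: 02-<hexnode>-0000000008skip-0000000032size.hg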
BUNDLEMASK = b"%s-%s-%010iskip-%010isize.hg"

def makeonecgpart(newpart, repo, rangeid, outgoing, version, source,
                  bundlecaps, filematcher, cgversions):
    bundlename = cachedata = None
    if rangeid is not None:
        nbchanges = repo.stablerange.rangelength(repo, rangeid)
        headnode = nodemod.hex(repo.changelog.node(rangeid[0]))
        # XXX do we need to use cgversion in there?
        bundlename = BUNDLEMASK % (version, headnode, rangeid[1], nbchanges)
        cachedata = getcache(repo, bundlename)
    if cachedata is None:
        partdata = _makenewstream(newpart, repo, outgoing, version, source,
                                  bundlecaps, filematcher, cgversions)
        if bundlename is not None:
            cgstream = cachewriter(repo, bundlename, partdata[0])
            partdata = (cgstream,) + partdata[1:]
    else:
        if repo.ui.verbose or source == b'bundle':
            repo.ui.status(_(b"%d changesets found in caches\n") % nbchanges)
        pversion = None
        if cgversions:
            pversion = version
        partdata = (cachedata, nbchanges, pversion)
    return _makepartfromstream(newpart, repo, *partdata)

@command(b'debugpullbundlecacheoverlap',
         [(b'', b'count', 100, _(b'number of simulated "client" pulls')),
          (b'', b'min-cache', 1, _(b'minimum size of cached bundle')),
          ],
         _(b'hg debugpullbundlecacheoverlap [--count 100] REVSET'))
def debugpullbundlecacheoverlap(ui, repo, *revs, **opts):
    '''display statistics about bundle cache hits

    This command simulates pulls from multiple clients, each using a random
    subset of revisions defined by REVSET, and displays statistics about the
    overlap in the bundles necessary to serve them.
    '''
    actionrevs = scmutil.revrange(repo, revs)
    if not actionrevs:
        raise error.Abort(b'No revision selected')
    count = opts['count']
    min_cache = opts['min_cache']

    bundlehits = collections.defaultdict(lambda: 0)
    pullstats = []

    rlen = lambda rangeid: repo.stablerange.rangelength(repo, rangeid)

    repo.ui.write(b"gathering %d sample pulls within %d revisions\n"
                  % (count, len(actionrevs)))
    if 1 < min_cache:
        repo.ui.write(b"  not caching ranges smaller than %d changesets\n" % min_cache)
    for i in range(count):
        progress(repo.ui, b'gathering data', i, total=count)
        outgoing = takeonesample(repo, actionrevs)
        ranges = sliceoutgoing(repo, outgoing)
        hitranges = 0
        hitchanges = 0
        totalchanges = 0
        largeranges = []
        for rangeid, __ in ranges:
            length = rlen(rangeid)
            totalchanges += length
            if bundlehits[rangeid]:
                hitranges += 1
                hitchanges += rlen(rangeid)
            if min_cache <= length:
                bundlehits[rangeid] += 1
                largeranges.append(rangeid)

        stats = (len(outgoing.missing),
                 totalchanges,
                 hitchanges,
                 len(largeranges),
                 hitranges,
                 )
        pullstats.append(stats)
    progress(repo.ui, b'gathering data', None)

    sizes = []
    changesmissing = []
    totalchanges = 0
    totalcached = 0
    changesratio = []
    rangesratio = []
    bundlecount = []
    for entry in pullstats:
        sizes.append(entry[0])
        changesmissing.append(entry[1] - entry[2])
        changesratio.append(entry[2] / float(entry[1]))
        if entry[3]:
            rangesratio.append(entry[4] / float(entry[3]))
        else:
            rangesratio.append(1)
        bundlecount.append(entry[3])
        totalchanges += entry[1]
        totalcached += entry[2]

    cachedsizes = []
    cachedhits = []
    for rangeid, hits in bundlehits.items():
        if hits <= 0:
            continue
        length = rlen(rangeid)
        cachedsizes.append(length)
        cachedhits.append(hits)

    sizesdist = distribution(sizes)
    repo.ui.write(fmtdist(b'pull size', sizesdist))

    changesmissingdist = distribution(changesmissing)
    repo.ui.write(fmtdist(b'non-cached changesets', changesmissingdist))

    changesratiodist = distribution(changesratio)
    repo.ui.write(fmtdist(b'ratio of cached changesets', changesratiodist))

    bundlecountdist = distribution(bundlecount)
    repo.ui.write(fmtdist(b'bundle count', bundlecountdist))

    rangesratiodist = distribution(rangesratio)
    repo.ui.write(fmtdist(b'ratio of cached bundles', rangesratiodist))

    repo.ui.write(b'changesets served:\n')
    repo.ui.write(b'  total:      %7d\n' % totalchanges)
    repo.ui.write(b'  from cache: %7d (%2d%%)\n'
                  % (totalcached, (totalcached * 100 // totalchanges)))
    repo.ui.write(b'  bundle:     %7d\n' % sum(bundlecount))

    cachedsizesdist = distribution(cachedsizes)
    repo.ui.write(fmtdist(b'size of cached bundles', cachedsizesdist))

    cachedhitsdist = distribution(cachedhits)
    repo.ui.write(fmtdist(b'hit on cached bundles', cachedhitsdist))

def takeonesample(repo, revs):
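    # Sample a random ~0.1% of the candidate revisions (at least 4), then
    # extend the sample to every revision located between sampled points
    # ('sample::sample') so that it resembles a plausible pull.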
    node = repo.changelog.node
    pulled = random.sample(revs, max(4, len(revs) // 1000))
    pulled = repo.revs(b'%ld::%ld', pulled, pulled)
    nodes = [node(r) for r in pulled]
    return outgoingfromnodes(repo, nodes)

def distribution(data):
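    # Summarize a data series with min/max and a few percentiles.
    # Note: sorts the input list in place.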
    data.sort()
    length = len(data)
    return {
        b'min': data[0],
        b'10%': data[length // 10],
        b'25%': data[length // 4],
        b'50%': data[length // 2],
        b'75%': data[(length // 4) * 3],
        b'90%': data[(length // 10) * 9],
        b'95%': data[(length // 20) * 19],
        b'max': data[-1],
    }

STATSFORMAT = b"""%(name)s:
  min: %(min)r
  10%%: %(10%)r
  25%%: %(25%)r
  50%%: %(50%)r
  75%%: %(75%)r
  90%%: %(90%)r
  95%%: %(95%)r
  max: %(max)r
"""

def fmtdist(name, data):
    data[b'name'] = name
    return STATSFORMAT % data

# hg <= 4.6 (bec1212eceaa)
if util.safehasattr(uimod.ui, 'makeprogress'):
    def progress(ui, topic, pos, item=b"", unit=b"", total=None):
        progress = ui.makeprogress(topic, unit, total)
        if pos is not None:
            progress.update(pos, item=item)
        else:
            progress.complete()
else:
    def progress(ui, topic, pos, item=b"", unit=b"", total=None):
        ui.progress(topic, pos, item, unit, total)

# nodemap.get and index.[has_node|rev|get_rev]
# hg <= 5.3 (02802fa87b74)
def getgetrev(cl):
    """Returns index.get_rev or nodemap.get (for pre-5.3 Mercurial)."""
    if util.safehasattr(cl.index, 'get_rev'):
        return cl.index.get_rev
    return cl.nodemap.get