
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Target project: mercurial/mercurial-devel
Commits on Source (26)
  • shelve: do not add the dirstate backup to the transaction · f0a3aaa07d6a
    Pierre-Yves David authored
    Otherwise the transaction will properly clean up its mess on abort… deleting the
    backup in the process.
    
    This breaks with dirstate-v2, which has more files than just the dirstate. The
    dirstate itself is full of various exceptions and is "fine" when using
    dirstate-v1.
    f0a3aaa07d6a
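    A toy model of the interaction described above (the class below is a simplified
    stand-in, not Mercurial's actual transaction API): a backup registered with the
    transaction is deleted by `abort()`, which is exactly what shelve needs to avoid.
    ```
    import os
    import shutil


    class ToyTransaction:
        """Simplified stand-in: tracks files it considers part of its state."""

        def __init__(self):
            self._backups = []

        def addbackup(self, path):
            # Anything registered here belongs to the transaction...
            self._backups.append(path)

        def abort(self):
            # ...so aborting cleans it up along with the rest of the state.
            for path in self._backups:
                if os.path.exists(path):
                    os.unlink(path)


    def abort_but_keep_dirstate(dirstate_path, tr):
        backup = dirstate_path + '.shelve'
        shutil.copyfile(dirstate_path, backup)
        # Passing the backup to tr.addbackup() would let tr.abort() delete it;
        # keeping it outside the transaction lets shelve restore it afterwards.
        tr.abort()
        shutil.copyfile(backup, dirstate_path)
        os.unlink(backup)
    ```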
  • windows: gracefully handle when the username cannot be determined · 805419729e11
    Matt Harbison authored
    This assumes implementation details, but I don't see any other way than to check
    the environment variables ourselves (which would miss out on any future
    enhancements that Python may make).  This was originally reported as
    mercurial/tortoisehg/thg#5835.
    805419729e11
  • dirstate-v2: skip evaluation of hgignore regex on cached directories · eb02decdf0ab
    Arseniy Alekseyev authored
    By making the computation of [has_ignored_ancestor] lazy, we elide its
    computation in the common case where none of its descendants have changed
    on disk.

    On a ~400k-file repo with a cached status, we saw a ~64% reduction
    in CPU time, resulting in a speedup of ~10-15% on ZFS, and a speedup
    of ~38% on XFS (XFS has faster stat operations for some reason).
    eb02decdf0ab
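    The change itself lives in the Rust dirstate-v2 status code; the Python sketch
    below is only an illustration of the laziness involved (the names are made up),
    not the real data structure:
    ```
    import functools


    class DirNode:
        """Illustrative directory node with a lazily computed ignore flag."""

        def __init__(self, path, parent, hgignore_match):
            self.path = path
            self.parent = parent
            self._match = hgignore_match  # the (expensive) compiled matcher

        @functools.cached_property
        def has_ignored_ancestor(self):
            # Evaluated at most once, and only if a descendant that changed on
            # disk actually asks; directories whose cached status is reused
            # never run the hgignore regex at all.
            if self.parent is None:
                return False
            return self._match(self.parent.path) or self.parent.has_ignored_ancestor
    ```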
  • configitems: change the `verify.skipflags` default value to avoid a py3 crash · a5f551f8b723
    Matt Harbison authored
    The revlog and LFS modules use various `&` and `&=` operations with this value,
    which no longer treat `None` as 0.  Since nothing cares if it was actually set
    in the config or not, just default to 0 for simplicity.
    a5f551f8b723
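    A minimal reproduction of the crash class being avoided (plain Python, with an
    illustrative flag value rather than the real constant):
    ```
    REVIDX_SOME_FLAG = 1 << 13  # illustrative value, not the real constant

    skipflags = None
    try:
        skipflags & REVIDX_SOME_FLAG
    except TypeError as exc:
        print(exc)  # unsupported operand type(s) for &: 'NoneType' and 'int'

    skipflags = 0
    print(skipflags & REVIDX_SOME_FLAG)  # 0 -> nothing is skipped, and no crash
    ```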
  • shelve: avoid setting overloading tmpwctx · f599a946181d
    Jason R. Coombs authored
    f599a946181d
  • shelve: re-wrap now that the line fits · 52dd7a43ad5c
    Jason R. Coombs authored
    52dd7a43ad5c
  • Anton Shestakov
  • Anton Shestakov
  • lfs: fix interpolation of int and %s in an exception case · 192949b68159
    Matt Harbison authored
    Seen in the wild in a server log when MS antivirus was quarantining a file on
    the client side.
    192949b68159
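    For reference, the general shape of this class of bug (plain Python, not the LFS
    code itself): byte-string `%s` formatting only accepts bytes-like values on
    Python 3, so interpolating an int needs `%d`.
    ```
    size = 12345

    try:
        b"unexpected response size: %s" % size
    except TypeError as exc:
        print(exc)  # %b requires a bytes-like object, ..., not 'int'

    print(b"unexpected response size: %d" % size)  # b'unexpected response size: 12345'
    ```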
  • branching: merge default into stable · a3356ab610fc
    Raphaël Gomès authored
    This marks the feature freeze for the 6.3 release
    a3356ab610fc
  • branching: merge stable into default · f68d285158b2
    Raphaël Gomès authored
    f68d285158b2
  • lfs: improve an exception message for blob corruption detected on transfer · 250d9c8aaf10
    Matt Harbison authored
    The message about the server crash originated in 0ee0a3f6a990 (after support for
    serving blobs was added), but was copied from the Facebook repo that forked
    prior to server side support.  Therefore, this message only displayed in their
    client, so it was safe to assume the server crashed.  But that was never the
    case for vanilla Mercurial, as I saw this in a server log.
    
    Also, display the blob reference so that it's easier to figure out where the
    problem was when a bunch of blobs are transferred at once.
    250d9c8aaf10
  • revlog: drop an unused variable assignment · 8d6c8a9a91f8
    Matt Harbison authored
    It's assigned again 2 lines later.
    8d6c8a9a91f8
  • revlog: use the user facing filename as the display_id for filelogs · 92892dff03f3
    Matt Harbison authored
    I had trouble isolating some LFS blob corruption detected by `hg verify` because
    the traceback referenced a file, but with the `data/` prefix in the `.hg/store`
    path, so it couldn't be located with the `file()` revset:
    
    ```
        Traceback (most recent call last):
          File "/mnt/d/mercurial/mercurial/revlog.py", line 3209, in verifyintegrity
            _verify_revision(self, skipflags, state, node)
          File "/mnt/d/mercurial/hgext/lfs/wrapper.py", line 246, in _verify_revision
            orig(rl, skipflags, state, node)
          File "/mnt/d/mercurial/mercurial/revlog.py", line 158, in _verify_revision
            rl.revision(node)
          File "/mnt/d/mercurial/mercurial/revlog.py", line 1816, in revision
            return self._revisiondata(nodeorrev, _df)
          File "/mnt/d/mercurial/mercurial/revlog.py", line 1870, in _revisiondata
            self.checkhash(text, node, rev=rev)
          File "/mnt/d/mercurial/mercurial/revlog.py", line 1996, in checkhash
            % (self.display_id, pycompat.bytestr(revornode))
        mercurial.error.RevlogError: integrity check failed on data/EXE/PPC/shrinksrec.exe:0
    ```
    
    (I'm a little surprised it resulted in a stacktrace instead of just a message,
    but that's a different issue.  I'm also not sure how to trigger the simplestore
    case, since IIUC, it's also a revlog based store.)
    
    It's not clear how to handle the changelog and manifest (because the user
    doesn't interact with them as a file), so those cases are left alone.  The other
    thing that would be nice to improve somehow is to indicate that the ":0" is a
    revlog revision, not the changeset revision that users are used to.  I'm not
    sure how to handle the "or node" part though.
    92892dff03f3
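    As a rough illustration of the mapping involved (the helper below is
    hypothetical; the real store also applies filename encoding): filelog store
    paths carry a `data/` prefix and an `.i`/`.d` suffix that users never type,
    which is why the `data/EXE/PPC/shrinksrec.exe` in the traceback could not be
    fed to the `file()` revset directly.
    ```
    def user_facing_name(store_path: bytes) -> bytes:
        """Hypothetical helper: strip store-only decoration from a filelog path."""
        name = store_path
        if name.startswith(b'data/'):
            name = name[len(b'data/'):]
        for suffix in (b'.i', b'.d'):
            if name.endswith(suffix):
                name = name[:-len(suffix)]
        return name


    print(user_facing_name(b'data/EXE/PPC/shrinksrec.exe.i'))  # b'EXE/PPC/shrinksrec.exe'
    ```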
  • mr-template: wrap the instructions inside a comment block · 7b6d3a9bd7be
    Matt Harbison authored
    At least in preview mode, this hides the text so the user doesn't have to delete
    it.  It's still visible in edit mode, so the user sees it.
    7b6d3a9bd7be
  • Anton Shestakov
  • Anton Shestakov
  • tests: remove non-python3 line matching and tests block · 976648e20856
    Pierre-Yves David authored
    We don't support Python 2 anymore.
    976648e20856
  • 6b32d39e9a67
    Raphaël Gomès authored
  • rust-status: query fs traversal metadata lazily · da48f170d203
    Raphaël Gomès authored
    Currently, any time the status algorithm needs to read a directory from the
    filesystem (because the stat-only optimization is not available), it also
    stats each directory entry eagerly.
    
    Stat'ing the entries is only needed in a few cases (like when checking
    the mtime of a directory for caching): this patch creates a wrapper struct
    `DirEntry` that only stats the directory entry it represents when needed.
    
    Excerpt of an `strace` before this change on Mozilla Central:
    
    ```
    openat(AT_FDCWD, ".", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
    newfstatat(3, "", {st_mode=S_IFDIR|0755, st_size=3540, ...}, AT_EMPTY_PATH) = 0
    getdents64(3, 0x55dc970bd440 /* 139 entries */, 32768) = 5072
    statx(3, ".hg", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFDIR|0755, stx_size=772, ...}) = 0
    
    [... 135 other successful `statx` calls]
    
    getdents64(3, 0x55dc970bd440 /* 0 entries */, 32768) = 0
    close(3)                                = 0
    ```
    
    After this change:
    
    ```
    openat(AT_FDCWD, ".", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
    newfstatat(3, "", {st_mode=S_IFDIR|0755, st_size=3540, ...}, AT_EMPTY_PATH) = 0
    getdents64(3, 0x561567c10190 /* 139 entries */, 32768) = 5072
    getdents64(3, 0x561567c10190 /* 0 entries */, 32768) = 0
    close(3)                                = 0
    ```
    da48f170d203
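    The change is in the Rust status code; the Python sketch below (illustrative
    only) captures the idea behind the wrapper: keep the name returned by the
    directory read and defer the `stat` until a caller actually needs metadata.
    Python's own `os.scandir()` entries behave similarly.
    ```
    import functools
    import os


    class LazyDirEntry:
        """Wraps a directory entry and only stats it on first metadata access."""

        def __init__(self, dirpath, name):
            self.name = name
            self._path = os.path.join(dirpath, name)

        @functools.cached_property
        def metadata(self):
            # The stat syscall happens here, once, and only if it is needed
            # (e.g. when checking a directory mtime for the status cache).
            return os.lstat(self._path)


    entries = [LazyDirEntry(".", name) for name in os.listdir(".")]
    # No lstat/statx calls have been issued yet; for the fully cached case the
    # names alone are enough, which is what removes the per-entry statx storm.
    ```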
  • check-code: drop the check for whitespace around named parameters · 3a2b6158374a
    Matt Harbison authored
    This check flags py3 annotations of named parameters, because `black` adds
    spaces around the assignment in this case.  Since the chosen formatter has
    opinions (and pylint also wants the space in the case of annotations), drop the
    check so we can use py3 annotations.
    3a2b6158374a
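    For reference, the two styles in question (this is standard `black`/PEP 8
    behaviour, shown with made-up function names):
    ```
    # Without an annotation, PEP 8 and black keep the '=' tight:
    def fetch(repo, rev, clean=False):
        pass


    # With an annotation, black (and pylint) put spaces around the '=',
    # which is the form the old check-code rule used to flag:
    def fetch_typed(repo, rev, clean: bool = False):
        pass
    ```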
  • typing: add basic type hints to localrepo.py · 8fa3f7c3a9ad
    Matt Harbison authored
    There's a lot more that could be done, but this sticks to the obviously correct
    stuff that is either related to existing imports or primitives.  Hopefully this
    helps smoke out more path related bytes vs str issues in TortoiseHg.
    
    I'm avoiding the interfaces for now, because they seem to confuse pytype and/or
    PyCharm.  It might be worth typing the return of `makelocalrepository` to
    `localrepository`, but that leaks an implementation detail, so that can be
    revisited later.
    8fa3f7c3a9ad
  • util: implement `writelines()` on atomictempfile · c4f07a011714
    Matt Harbison authored
    With typehints on the vfs objects, pytype will flag this:
    
        FAILED: /mnt/c/Users/Matt/hg/.pytype/pyi/mercurial/patch.pyi
        /usr/bin/python3.8 -m pytype.single
            --imports_info /mnt/c/Users/Matt/hg/.pytype/imports/mercurial.patch.imports
            --module-name mercurial.patch -V 3.7
            -o /mnt/c/Users/Matt/hg/.pytype/pyi/mercurial/patch.pyi
            --analyze-annotated --nofail --quick
            /mnt/c/Users/Matt/hg/mercurial/patch.py
        File "/mnt/c/Users/Matt/hg/mercurial/patch.py", line 535, in writerej:
            No attribute 'writelines' on mercurial.util.atomictempfile [attribute-error]
          In Union[
              mercurial.util.atomictempfile,
              mercurial.vfs.checkambigatclosing,
              mercurial.vfs.delayclosedfile,
              mercurial.windows.fdproxy,
              mercurial.windows.mixedfilemodewrapper
          ]
    
    It's not a real problem there (atomictempfile is only created by passing
    different args), but it's reasonable for this to implement the function and
    behave like a normal file.  There are other functions missing that can be added
    if/when needed.
    c4f07a011714
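    The fix itself is visible in the diff further down: `atomictempfile` already
    forwards `read`/`write`/`seek` to its backing file object, and `writelines`
    simply joins that list. A stripped-down sketch of the delegation pattern (not
    the real class, which also handles the atomic rename):
    ```
    class fileproxy:
        """Minimal stand-in showing method delegation to a backing file."""

        def __init__(self, fp):
            self._fp = fp
            # delegated methods: calls go straight to the underlying file
            self.read = self._fp.read
            self.write = self._fp.write
            self.writelines = self._fp.writelines
            self.seek = self._fp.seek


    with open("example.tmp", "wb") as fp:
        proxy = fileproxy(fp)
        proxy.writelines([b"line one\n", b"line two\n"])
    ```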
  • typing: add basic type hints to vfs.py · cc9a60050a07
    Matt Harbison authored
    Again, there's a lot more that could be done, but this sticks to the obviously
    correct stuff that is related to primitives or `vfs` objects.  Hopefully this
    helps smoke out more path related bytes vs str issues in TortoiseHg.
    
    PyCharm seems smart enough to apply hints from annotated superclass functions,
    but pytype isn't (according to the *.pyi file generated), so those are annotated
    too.
    
    There was some discussion about changing the default path arg from `None` to
    `b''` in order to avoid the more verbose `Optional` declarations.  This would be
    more in line with `os.path.join()` (which rejects `None`, but ignores empty
    strings), and still not change the behavior for callers still passing `None`
    (because the check is `if path` instead of an explicit check for `None`).  But I
    didn't want to hold this up while discussing that, so this documents what _is_.
    cc9a60050a07
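    The equivalence the last paragraph relies on is easy to check (plain Python, no
    Mercurial imports): because callers guard with `if path:` rather than
    `if path is not None:`, `None` and `b''` take the same branch, so changing the
    default would not alter behaviour.
    ```
    def join(base: bytes, path=None) -> bytes:
        if path:  # falsy for both None and b''
            return base + b'/' + path
        return base


    assert join(b'/repo', None) == join(b'/repo', b'') == b'/repo'
    ```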
  • vfs: make the default opener mode binary · 2506c3ac73f4
    Matt Harbison authored
    The default was already binary for `abstractvfs`, and the `vfs` implementation
    adds binary mode if the caller didn't supply it.  Therefore, it should be safe
    for all vfs objects (and I don't think we want text reads anyway).
    2506c3ac73f4
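    The normalisation described above amounts to something like the following (a
    sketch of the idea, not the exact vfs code):
    ```
    def _normalize_mode(mode: bytes) -> bytes:
        # The vfs layer always wants binary I/O: bytes paths and bytes data.
        if b'b' not in mode:
            mode += b'b'
        return mode


    assert _normalize_mode(b'r') == b'rb'
    assert _normalize_mode(b'w+') == b'w+b'
    assert _normalize_mode(b'rb') == b'rb'
    ```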
  • typing: add basic type hints to stringutil.py · bbbb5213d043
    Matt Harbison authored
    bbbb5213d043
Showing 200 additions and 136 deletions
 /assign_reviewer @mercurial.review
+<!--
 Welcome to the Mercurial Merge Request creation process:
 * Set a simple title for your MR,
@@ -11,3 +14,5 @@
 * https://www.mercurial-scm.org/wiki/ContributingChanges
 * https://www.mercurial-scm.org/wiki/Heptapod
+-->
@@ -372,10 +372,6 @@
     ),
     (r'[^^+=*/!<>&| %-](\s=|=\s)[^= ]', "wrong whitespace around ="),
     (
-        r'\([^()]*( =[^=]|[^<>!=]= )',
-        "no whitespace around = for named parameters",
-    ),
-    (
         r'raise [^,(]+, (\([^\)]+\)|[^,\(\)]+)$',
         "don't use old-style two-argument raise, use Exception(message)",
     ),
@@ -23,8 +23,6 @@
 enabled.
 """
-# This line is unnecessary, but it satisfies test-check-py3-compat.t.
 import contextlib
 import importlib.util
 import sys
@@ -26,8 +26,6 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# no unicode literals
 import inspect
 import math
 import os

@@ -26,8 +26,6 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# no unicode literals
 def parse_version(vstr):
     res = 0

@@ -26,8 +26,6 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# no unicode literals
 import sys

@@ -26,8 +26,6 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# no unicode literals
 import sys
 from . import compat

@@ -26,8 +26,6 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# no unicode literals
 import ctypes

@@ -26,8 +26,6 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# no unicode literals
 import binascii
 import collections
 import ctypes
@@ -168,5 +168,9 @@
         # producing the response (but the server has no way of telling us
         # that), and we really don't need to try to write the response to
         # the localstore, because it's not going to match the expected.
+        # The server also uses this method to store data uploaded by the
+        # client, so if this happens on the server side, it's possible
+        # that the client crashed or an antivirus interfered with the
+        # upload.
         if content_length is not None and int(content_length) != size:
             msg = (
@@ -171,5 +175,5 @@
         if content_length is not None and int(content_length) != size:
             msg = (
-                b"Response length (%s) does not match Content-Length "
-                b"header (%d): likely server-side crash"
+                b"Response length (%d) does not match Content-Length "
+                b"header (%d) for %s"
             )
@@ -175,5 +179,5 @@
             )
-            raise LfsRemoteError(_(msg) % (size, int(content_length)))
+            raise LfsRemoteError(_(msg) % (size, int(content_length), oid))
         realoid = hex(sha256.digest())
         if realoid != oid:
@@ -2569,7 +2569,7 @@
 coreconfigitem(
     b'verify',
     b'skipflags',
-    default=None,
+    default=0,
 )
 coreconfigitem(
     b'web',
@@ -15,6 +15,10 @@
 import weakref
 from concurrent import futures
+from typing import (
+    Optional,
+)
 from .i18n import _
 from .node import (
     bin,
@@ -526,7 +530,7 @@
     return set(read(b'requires').splitlines())
-def makelocalrepository(baseui, path, intents=None):
+def makelocalrepository(baseui, path: bytes, intents=None):
     """Create a local repository object.
     Given arguments needed to construct a local repository, this function
@@ -845,7 +849,13 @@
     )
-def loadhgrc(ui, wdirvfs, hgvfs, requirements, sharedvfs=None):
+def loadhgrc(
+    ui,
+    wdirvfs: vfsmod.vfs,
+    hgvfs: vfsmod.vfs,
+    requirements,
+    sharedvfs: Optional[vfsmod.vfs] = None,
+):
     """Load hgrc files/content into a ui instance.
     This is called during repository opening to load any additional
@@ -1323,8 +1333,8 @@
         self,
         baseui,
         ui,
-        origroot,
-        wdirvfs,
-        hgvfs,
+        origroot: bytes,
+        wdirvfs: vfsmod.vfs,
+        hgvfs: vfsmod.vfs,
         requirements,
         supportedrequirements,
@@ -1329,4 +1339,4 @@
         requirements,
         supportedrequirements,
-        sharedpath,
+        sharedpath: bytes,
         store,
@@ -1332,6 +1342,6 @@
         store,
-        cachevfs,
-        wcachevfs,
+        cachevfs: vfsmod.vfs,
+        wcachevfs: vfsmod.vfs,
         features,
         intents=None,
     ):
@@ -1977,7 +1987,7 @@
     def __iter__(self):
         return iter(self.changelog)
-    def revs(self, expr, *args):
+    def revs(self, expr: bytes, *args):
         """Find revisions matching a revset.
         The revset is specified as a string ``expr`` that may contain
@@ -1993,7 +2003,7 @@
         tree = revsetlang.spectree(expr, *args)
         return revset.makematcher(tree)(self)
-    def set(self, expr, *args):
+    def set(self, expr: bytes, *args):
         """Find revisions matching a revset and emit changectx instances.
         This is a convenience wrapper around ``revs()`` that iterates the
@@ -2005,7 +2015,7 @@
         for r in self.revs(expr, *args):
             yield self[r]
-    def anyrevs(self, specs, user=False, localalias=None):
+    def anyrevs(self, specs: bytes, user=False, localalias=None):
         """Find revisions matching one of the given revsets.
         Revset aliases from the configuration are not expanded by default. To
@@ -2030,7 +2040,7 @@
         m = revset.matchany(None, specs, localalias=localalias)
         return m(self)
-    def url(self):
+    def url(self) -> bytes:
         return b'file:' + self.root
     def hook(self, name, throw=False, **args):
@@ -2229,7 +2239,7 @@
             return b'store'
         return None
-    def wjoin(self, f, *insidef):
+    def wjoin(self, f: bytes, *insidef: bytes) -> bytes:
         return self.vfs.reljoin(self.root, f, *insidef)
     def setparents(self, p1, p2=None):
@@ -2238,10 +2248,10 @@
         self[None].setparents(p1, p2)
         self._quick_access_changeid_invalidate()
-    def filectx(self, path, changeid=None, fileid=None, changectx=None):
+    def filectx(self, path: bytes, changeid=None, fileid=None, changectx=None):
         """changeid must be a changeset revision, if specified.
         fileid can be a file revision or node."""
         return context.filectx(
             self, path, changeid, fileid, changectx=changectx
         )
@@ -2242,9 +2252,9 @@
         """changeid must be a changeset revision, if specified.
         fileid can be a file revision or node."""
         return context.filectx(
             self, path, changeid, fileid, changectx=changectx
         )
-    def getcwd(self):
+    def getcwd(self) -> bytes:
         return self.dirstate.getcwd()
@@ -2249,6 +2259,6 @@
         return self.dirstate.getcwd()
-    def pathto(self, f, cwd=None):
+    def pathto(self, f: bytes, cwd: Optional[bytes] = None) -> bytes:
         return self.dirstate.pathto(f, cwd)
     def _loadfilter(self, filter):
@@ -2300,10 +2310,10 @@
     def adddatafilter(self, name, filter):
         self._datafilters[name] = filter
-    def wread(self, filename):
+    def wread(self, filename: bytes) -> bytes:
         if self.wvfs.islink(filename):
             data = self.wvfs.readlink(filename)
         else:
             data = self.wvfs.read(filename)
         return self._filter(self._encodefilterpats, filename, data)
@@ -2304,10 +2314,17 @@
         if self.wvfs.islink(filename):
             data = self.wvfs.readlink(filename)
         else:
             data = self.wvfs.read(filename)
         return self._filter(self._encodefilterpats, filename, data)
-    def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
+    def wwrite(
+        self,
+        filename: bytes,
+        data: bytes,
+        flags: bytes,
+        backgroundclose=False,
+        **kwargs
+    ) -> int:
         """write ``data`` into ``filename`` in the working directory
         This returns length of written (maybe decoded) data.
@@ -2325,7 +2342,7 @@
             self.wvfs.setflags(filename, False, False)
         return len(data)
-    def wwritedata(self, filename, data):
+    def wwritedata(self, filename: bytes, data: bytes) -> bytes:
         return self._filter(self._decodefilterpats, filename, data)
     def currenttransaction(self):
@@ -3520,9 +3537,9 @@
     return a
-def undoname(fn):
+def undoname(fn: bytes) -> bytes:
     base, name = os.path.split(fn)
     assert name.startswith(b'journal')
     return os.path.join(base, name.replace(b'journal', b'undo', 1))
@@ -3524,9 +3541,9 @@
     base, name = os.path.split(fn)
     assert name.startswith(b'journal')
     return os.path.join(base, name.replace(b'journal', b'undo', 1))
-def instance(ui, path, create, intents=None, createopts=None):
+def instance(ui, path: bytes, create, intents=None, createopts=None):
     # prevent cyclic import localrepo -> upgrade -> localrepo
     from . import upgrade
@@ -3543,7 +3560,7 @@
     return repo
-def islocal(path):
+def islocal(path: bytes) -> bool:
     return True
@@ -3801,7 +3818,7 @@
     return {k: v for k, v in createopts.items() if k not in known}
-def createrepository(ui, path, createopts=None, requirements=None):
+def createrepository(ui, path: bytes, createopts=None, requirements=None):
     """Create a new repository in a vfs.
     ``path`` path to the new repo's working directory.
@@ -44,6 +44,7 @@
     FLAG_INLINE_DATA,
     INDEX_HEADER,
     KIND_CHANGELOG,
+    KIND_FILELOG,
     RANK_UNKNOWN,
     REVLOGV0,
     REVLOGV1,
@@ -505,7 +506,6 @@
             self._docket = docket
             self._docket_file = entry_point
         else:
-            entry_data = b''
             self._initempty = True
             entry_data = self._get_data(entry_point, mmapindexthreshold)
             if len(entry_data) > 0:
@@ -653,9 +653,12 @@
     @util.propertycache
     def display_id(self):
         """The public facing "ID" of the revlog that we use in message"""
-        # Maybe we should build a user facing representation of
-        # revlog.target instead of using `self.radix`
-        return self.radix
+        if self.revlog_kind == KIND_FILELOG:
+            # Reference the file without the "data/" prefix, so it is familiar
+            # to the user.
+            return self.target[1]
+        else:
+            return self.radix
     def _get_decompressor(self, t):
         try:
@@ -425,7 +425,7 @@
 def _aborttransaction(repo, tr):
     """Abort current transaction for shelve/unshelve, but keep dirstate"""
     dirstatebackupname = b'dirstate.shelve'
-    repo.dirstate.savebackup(tr, dirstatebackupname)
+    repo.dirstate.savebackup(None, dirstatebackupname)
     tr.abort()
     repo.dirstate.restorebackup(None, dirstatebackupname)
@@ -1177,7 +1177,6 @@
     oldtiprev = len(repo)
     pctx = repo[b'.']
-    tmpwctx = pctx
     # The goal is to have a commit structure like so:
     # ...-> pctx -> tmpwctx -> shelvectx
     # where tmpwctx is an optional commit with the user's pending changes
@@ -1185,9 +1184,7 @@
     # to the original pctx.
     activebookmark = _backupactivebookmark(repo)
-    tmpwctx, addedbefore = _commitworkingcopychanges(
-        ui, repo, opts, tmpwctx
-    )
+    tmpwctx, addedbefore = _commitworkingcopychanges(ui, repo, opts, pctx)
     repo, shelvectx = _unshelverestorecommit(ui, repo, tr, basename)
     _checkunshelveuntrackedproblems(ui, repo, shelvectx)
     branchtorestore = b''
@@ -2542,6 +2542,7 @@
         # delegated methods
         self.read = self._fp.read
         self.write = self._fp.write
+        self.writelines = self._fp.writelines
         self.seek = self._fp.seek
         self.tell = self._fp.tell
         self.fileno = self._fp.fileno
@@ -14,6 +14,11 @@
 import textwrap
 import types
+from typing import (
+    Optional,
+    overload,
+)
 from ..i18n import _
 from ..thirdparty import attr
@@ -30,6 +35,16 @@
 regexbytesescapemap = {i: (b'\\' + i) for i in _respecial}
+@overload
+def reescape(pat: bytes) -> bytes:
+    ...
+@overload
+def reescape(pat: str) -> str:
+    ...
 def reescape(pat):
     """Drop-in replacement for re.escape."""
     # NOTE: it is intentional that this works on unicodes and not
@@ -45,8 +60,8 @@
     return pat.encode('latin1')
-def pprint(o, bprefix=False, indent=0, level=0):
+def pprint(o, bprefix: bool = False, indent: int = 0, level: int = 0) -> bytes:
     """Pretty print an object."""
     return b''.join(pprintgen(o, bprefix=bprefix, indent=indent, level=level))
@@ -49,8 +64,8 @@
     """Pretty print an object."""
     return b''.join(pprintgen(o, bprefix=bprefix, indent=indent, level=level))
-def pprintgen(o, bprefix=False, indent=0, level=0):
+def pprintgen(o, bprefix: bool = False, indent: int = 0, level: int = 0):
     """Pretty print an object to a generator of atoms.
     ``bprefix`` is a flag influencing whether bytestrings are preferred with
@@ -250,7 +265,7 @@
         yield pycompat.byterepr(o)
-def prettyrepr(o):
+def prettyrepr(o) -> bytes:
     """Pretty print a representation of a possibly-nested object"""
     lines = []
     rs = pycompat.byterepr(o)
@@ -281,7 +296,7 @@
     return b'\n'.join(b' ' * l + s for l, s in lines)
-def buildrepr(r):
+def buildrepr(r) -> bytes:
     """Format an optional printable representation from unexpanded bits
     ======== =================================
@@ -305,8 +320,8 @@
     return pprint(r)
-def binary(s):
+def binary(s: bytes) -> bool:
     """return true if a string is binary data"""
     return bool(s and b'\0' in s)
@@ -309,8 +324,8 @@
     """return true if a string is binary data"""
     return bool(s and b'\0' in s)
-def _splitpattern(pattern):
+def _splitpattern(pattern: bytes):
     if pattern.startswith(b're:'):
         return b're', pattern[3:]
     elif pattern.startswith(b'literal:'):
@@ -318,7 +333,7 @@
     return b'literal', pattern
-def stringmatcher(pattern, casesensitive=True):
+def stringmatcher(pattern: bytes, casesensitive: bool = True):
     """
     accepts a string, possibly starting with 're:' or 'literal:' prefix.
     returns the matcher name, pattern, and matcher function.
@@ -379,7 +394,7 @@
     raise error.ProgrammingError(b'unhandled pattern kind: %s' % kind)
-def substringregexp(pattern, flags=0):
+def substringregexp(pattern: bytes, flags: int = 0):
     """Build a regexp object from a string pattern possibly starting with
     're:' or 'literal:' prefix.
@@ -431,7 +446,7 @@
     raise error.ProgrammingError(b'unhandled pattern kind: %s' % kind)
-def shortuser(user):
+def shortuser(user: bytes) -> bytes:
     """Return a short representation of a user name or email address."""
     f = user.find(b'@')
     if f >= 0:
@@ -448,7 +463,7 @@
     return user
-def emailuser(user):
+def emailuser(user: bytes) -> bytes:
     """Return the user portion of an email address."""
     f = user.find(b'@')
     if f >= 0:
@@ -459,7 +474,7 @@
     return user
-def email(author):
+def email(author: bytes) -> bytes:
     '''get email of author.'''
     r = author.find(b'>')
     if r == -1:
@@ -467,7 +482,7 @@
     return author[author.find(b'<') + 1 : r]
-def person(author):
+def person(author: bytes) -> bytes:
     """Returns the name before an email address,
     interpreting it as per RFC 5322
@@ -612,7 +627,7 @@
     return mailmap
-def mapname(mailmap, author):
+def mapname(mailmap, author: bytes) -> bytes:
     """Returns the author field according to the mailmap cache, or
     the original author field.
@@ -663,7 +678,7 @@
 _correctauthorformat = remod.compile(br'^[^<]+\s<[^<>]+@[^<>]+>$')
-def isauthorwellformed(author):
+def isauthorwellformed(author: bytes) -> bool:
     """Return True if the author field is well formed
     (ie "Contributor Name <contrib@email.dom>")
@@ -685,7 +700,7 @@
     return _correctauthorformat.match(author) is not None
-def firstline(text):
+def firstline(text: bytes) -> bytes:
     """Return the first line of the input"""
     # Try to avoid running splitlines() on the whole string
     i = text.find(b'\n')
@@ -697,8 +712,8 @@
     return b''
-def ellipsis(text, maxlength=400):
+def ellipsis(text: bytes, maxlength: int = 400) -> bytes:
     """Trim string to at most maxlength (default: 400) columns in display."""
     return encoding.trim(text, maxlength, ellipsis=b'...')
@@ -701,8 +716,9 @@
     """Trim string to at most maxlength (default: 400) columns in display."""
     return encoding.trim(text, maxlength, ellipsis=b'...')
-def escapestr(s):
+def escapestr(s: bytes) -> bytes:
+    # "bytes" is also a typing shortcut for bytes, bytearray, and memoryview
     if isinstance(s, memoryview):
         s = bytes(s)
     # call underlying function of s.encode('string_escape') directly for
@@ -710,7 +726,7 @@
     return codecs.escape_encode(s)[0] # pytype: disable=module-attr
-def unescapestr(s):
+def unescapestr(s: bytes) -> bytes:
     return codecs.escape_decode(s)[0] # pytype: disable=module-attr
@@ -724,7 +740,7 @@
     return pycompat.bytestr(encoding.strtolocal(str(obj)))
-def uirepr(s):
+def uirepr(s: bytes) -> bytes:
     # Avoid double backslash in Windows path repr()
     return pycompat.byterepr(pycompat.bytestr(s)).replace(b'\\\\', b'\\')
@@ -838,7 +854,9 @@
     return tw(**kwargs)
-def wrap(line, width, initindent=b'', hangindent=b''):
+def wrap(
+    line: bytes, width: int, initindent: bytes = b'', hangindent: bytes = b''
+) -> bytes:
     maxindent = max(len(hangindent), len(initindent))
     if width <= maxindent:
         # adjust for weird terminal size
@@ -875,7 +893,7 @@
 }
-def parsebool(s):
+def parsebool(s: bytes) -> Optional[bool]:
     """Parse s into a boolean.
     If s is not a valid boolean, returns None.
@@ -883,7 +901,8 @@
     return _booleans.get(s.lower(), None)
-def parselist(value):
+# TODO: make arg mandatory (and fix code below?)
+def parselist(value: Optional[bytes]):
     """parse a configuration value as a list of comma/space separated strings
     >>> parselist(b'this,is "a small" ,test')
@@ -973,7 +992,7 @@
     return result or []
-def evalpythonliteral(s):
+def evalpythonliteral(s: bytes):
     """Evaluate a string containing a Python literal expression"""
     # We could backport our tokenizer hack to rewrite '' to u'' if we want
     return ast.literal_eval(s.decode('latin1'))
@@ -11,6 +11,10 @@
 import stat
 import threading
+from typing import (
+    Optional,
+)
 from .i18n import _
 from .pycompat import (
     delattr,
@@ -26,7 +30,7 @@
 )
-def _avoidambig(path, oldstat):
+def _avoidambig(path: bytes, oldstat):
     """Avoid file stat ambiguity forcibly
     This function causes copying ``path`` file, if it is owned by
@@ -60,6 +64,7 @@
         '''Prevent instantiation; don't call this from subclasses.'''
         raise NotImplementedError('attempted instantiating ' + str(type(self)))
-    def __call__(self, path, mode=b'rb', **kwargs):
+    # TODO: type return, which is util.posixfile wrapped by a proxy
+    def __call__(self, path: bytes, mode: bytes = b'rb', **kwargs):
         raise NotImplementedError
@@ -64,5 +69,5 @@
         raise NotImplementedError
-    def _auditpath(self, path, mode):
+    def _auditpath(self, path: bytes, mode: bytes):
         raise NotImplementedError
@@ -67,5 +72,5 @@
         raise NotImplementedError
-    def join(self, path, *insidef):
+    def join(self, path: Optional[bytes], *insidef: bytes) -> bytes:
         raise NotImplementedError
@@ -70,6 +75,6 @@
         raise NotImplementedError
-    def tryread(self, path):
+    def tryread(self, path: bytes) -> bytes:
         '''gracefully return an empty string for missing files'''
         try:
             return self.read(path)
@@ -77,7 +82,7 @@
             pass
         return b""
-    def tryreadlines(self, path, mode=b'rb'):
+    def tryreadlines(self, path: bytes, mode: bytes = b'rb'):
         '''gracefully return an empty array for missing files'''
         try:
             return self.readlines(path, mode=mode)
@@ -95,7 +100,7 @@
         """
         return self.__call__
-    def read(self, path):
+    def read(self, path: bytes) -> bytes:
         with self(path, b'rb') as fp:
             return fp.read()
@@ -99,7 +104,7 @@
         with self(path, b'rb') as fp:
             return fp.read()
-    def readlines(self, path, mode=b'rb'):
+    def readlines(self, path: bytes, mode: bytes = b'rb'):
         with self(path, mode=mode) as fp:
             return fp.readlines()
@@ -103,7 +108,9 @@
         with self(path, mode=mode) as fp:
             return fp.readlines()
-    def write(self, path, data, backgroundclose=False, **kwargs):
+    def write(
+        self, path: bytes, data: bytes, backgroundclose=False, **kwargs
+    ) -> int:
         with self(path, b'wb', backgroundclose=backgroundclose, **kwargs) as fp:
             return fp.write(data)
@@ -107,7 +114,9 @@
         with self(path, b'wb', backgroundclose=backgroundclose, **kwargs) as fp:
             return fp.write(data)
-    def writelines(self, path, data, mode=b'wb', notindexed=False):
+    def writelines(
+        self, path: bytes, data: bytes, mode: bytes = b'wb', notindexed=False
+    ) -> None:
         with self(path, mode=mode, notindexed=notindexed) as fp:
             return fp.writelines(data)
@@ -111,7 +120,7 @@
         with self(path, mode=mode, notindexed=notindexed) as fp:
             return fp.writelines(data)
-    def append(self, path, data):
+    def append(self, path: bytes, data: bytes) -> int:
         with self(path, b'ab') as fp:
             return fp.write(data)
@@ -115,9 +124,9 @@
         with self(path, b'ab') as fp:
            return fp.write(data)
-    def basename(self, path):
+    def basename(self, path: bytes) -> bytes:
         """return base element of a path (as os.path.basename would do)
         This exists to allow handling of strange encoding if needed."""
         return os.path.basename(path)
@@ -119,8 +128,8 @@
         """return base element of a path (as os.path.basename would do)
         This exists to allow handling of strange encoding if needed."""
         return os.path.basename(path)
-    def chmod(self, path, mode):
+    def chmod(self, path: bytes, mode: int) -> None:
         return os.chmod(self.join(path), mode)
@@ -125,8 +134,8 @@
         return os.chmod(self.join(path), mode)
-    def dirname(self, path):
+    def dirname(self, path: bytes) -> bytes:
         """return dirname element of a path (as os.path.dirname would do)
         This exists to allow handling of strange encoding if needed."""
         return os.path.dirname(path)
@@ -128,11 +137,11 @@
         """return dirname element of a path (as os.path.dirname would do)
         This exists to allow handling of strange encoding if needed."""
         return os.path.dirname(path)
-    def exists(self, path=None):
+    def exists(self, path: Optional[bytes] = None) -> bool:
         return os.path.exists(self.join(path))
     def fstat(self, fp):
         return util.fstat(fp)
@@ -134,8 +143,8 @@
         return os.path.exists(self.join(path))
     def fstat(self, fp):
         return util.fstat(fp)
-    def isdir(self, path=None):
+    def isdir(self, path: Optional[bytes] = None) -> bool:
         return os.path.isdir(self.join(path))
@@ -140,5 +149,5 @@
         return os.path.isdir(self.join(path))
-    def isfile(self, path=None):
+    def isfile(self, path: Optional[bytes] = None) -> bool:
         return os.path.isfile(self.join(path))
@@ -143,5 +152,5 @@
         return os.path.isfile(self.join(path))
-    def islink(self, path=None):
+    def islink(self, path: Optional[bytes] = None) -> bool:
         return os.path.islink(self.join(path))
@@ -146,6 +155,6 @@
         return os.path.islink(self.join(path))
-    def isfileorlink(self, path=None):
+    def isfileorlink(self, path: Optional[bytes] = None) -> bool:
         """return whether path is a regular file or a symlink
         Unlike isfile, this doesn't follow symlinks."""
@@ -156,7 +165,7 @@
         mode = st.st_mode
         return stat.S_ISREG(mode) or stat.S_ISLNK(mode)
-    def _join(self, *paths):
+    def _join(self, *paths: bytes) -> bytes:
         root_idx = 0
         for idx, p in enumerate(paths):
             if os.path.isabs(p) or p.startswith(self._dir_sep):
@@ -166,10 +175,10 @@
         paths = [p for p in paths if p]
         return self._dir_sep.join(paths)
-    def reljoin(self, *paths):
+    def reljoin(self, *paths: bytes) -> bytes:
         """join various elements of a path together (as os.path.join would do)
         The vfs base is not injected so that path stay relative. This exists
         to allow handling of strange encoding if needed."""
         return self._join(*paths)
@@ -170,12 +179,12 @@
         """join various elements of a path together (as os.path.join would do)
         The vfs base is not injected so that path stay relative. This exists
         to allow handling of strange encoding if needed."""
         return self._join(*paths)
-    def split(self, path):
+    def split(self, path: bytes):
         """split top-most element of a path (as os.path.split would do)
         This exists to allow handling of strange encoding if needed."""
         return os.path.split(path)
@@ -177,8 +186,8 @@
         """split top-most element of a path (as os.path.split would do)
         This exists to allow handling of strange encoding if needed."""
         return os.path.split(path)
-    def lexists(self, path=None):
+    def lexists(self, path: Optional[bytes] = None) -> bool:
         return os.path.lexists(self.join(path))
@@ -183,5 +192,5 @@
         return os.path.lexists(self.join(path))
-    def lstat(self, path=None):
+    def lstat(self, path: Optional[bytes] = None):
         return os.lstat(self.join(path))
@@ -186,5 +195,5 @@
         return os.lstat(self.join(path))
-    def listdir(self, path=None):
+    def listdir(self, path: Optional[bytes] = None):
         return os.listdir(self.join(path))
@@ -189,5 +198,5 @@
         return os.listdir(self.join(path))
-    def makedir(self, path=None, notindexed=True):
+    def makedir(self, path: Optional[bytes] = None, notindexed=True):
         return util.makedir(self.join(path), notindexed)
@@ -192,5 +201,7 @@
         return util.makedir(self.join(path), notindexed)
-    def makedirs(self, path=None, mode=None):
+    def makedirs(
+        self, path: Optional[bytes] = None, mode: Optional[int] = None
+    ):
         return util.makedirs(self.join(path), mode)
@@ -195,5 +206,5 @@
         return util.makedirs(self.join(path), mode)
-    def makelock(self, info, path):
+    def makelock(self, info, path: bytes):
         return util.makelock(info, self.join(path))
@@ -198,5 +209,5 @@
         return util.makelock(info, self.join(path))
-    def mkdir(self, path=None):
+    def mkdir(self, path: Optional[bytes] = None):
         return os.mkdir(self.join(path))
@@ -201,6 +212,11 @@
         return os.mkdir(self.join(path))
-    def mkstemp(self, suffix=b'', prefix=b'tmp', dir=None):
+    def mkstemp(
+        self,
+        suffix: bytes = b'',
+        prefix: bytes = b'tmp',
+        dir: Optional[bytes] = None,
+    ):
         fd, name = pycompat.mkstemp(
             suffix=suffix, prefix=prefix, dir=self.join(dir)
         )
@@ -210,6 +226,6 @@
         else:
             return fd, fname
-    def readdir(self, path=None, stat=None, skip=None):
+    def readdir(self, path: Optional[bytes] = None, stat=None, skip=None):
         return util.listdir(self.join(path), stat, skip)
@@ -214,5 +230,5 @@
         return util.listdir(self.join(path), stat, skip)
-    def readlock(self, path):
+    def readlock(self, path: bytes) -> bytes:
         return util.readlock(self.join(path))
@@ -217,6 +233,6 @@
         return util.readlock(self.join(path))
-    def rename(self, src, dst, checkambig=False):
+    def rename(self, src: bytes, dst: bytes, checkambig=False):
         """Rename from src to dst
         checkambig argument is used with util.filestat, and is useful
@@ -238,6 +254,6 @@
             return ret
         return util.rename(srcpath, dstpath)
-    def readlink(self, path):
+    def readlink(self, path: bytes) -> bytes:
         return util.readlink(self.join(path))
@@ -242,6 +258,6 @@
         return util.readlink(self.join(path))
-    def removedirs(self, path=None):
+    def removedirs(self, path: Optional[bytes] = None):
         """Remove a leaf directory and all empty intermediate ones"""
         return util.removedirs(self.join(path))
@@ -245,7 +261,7 @@
         """Remove a leaf directory and all empty intermediate ones"""
         return util.removedirs(self.join(path))
-    def rmdir(self, path=None):
+    def rmdir(self, path: Optional[bytes] = None):
         """Remove an empty directory."""
         return os.rmdir(self.join(path))
@@ -249,7 +265,9 @@
         """Remove an empty directory."""
         return os.rmdir(self.join(path))
-    def rmtree(self, path=None, ignore_errors=False, forcibly=False):
+    def rmtree(
+        self, path: Optional[bytes] = None, ignore_errors=False, forcibly=False
+    ):
         """Remove a directory tree recursively
         If ``forcibly``, this tries to remove READ-ONLY files, too.
@@ -272,6 +290,6 @@
             self.join(path), ignore_errors=ignore_errors, onerror=onerror
         )
-    def setflags(self, path, l, x):
+    def setflags(self, path: bytes, l: bool, x: bool):
         return util.setflags(self.join(path), l, x)
@@ -276,5 +294,5 @@
         return util.setflags(self.join(path), l, x)
-    def stat(self, path=None):
+    def stat(self, path: Optional[bytes] = None):
         return os.stat(self.join(path))
@@ -279,5 +297,5 @@
         return os.stat(self.join(path))
-    def unlink(self, path=None):
+    def unlink(self, path: Optional[bytes] = None):
         return util.unlink(self.join(path))
@@ -282,6 +300,6 @@
         return util.unlink(self.join(path))
-    def tryunlink(self, path=None):
+    def tryunlink(self, path: Optional[bytes] = None):
         """Attempt to remove a file, ignoring missing file errors."""
         util.tryunlink(self.join(path))
@@ -285,8 +303,10 @@
         """Attempt to remove a file, ignoring missing file errors."""
        util.tryunlink(self.join(path))
-    def unlinkpath(self, path=None, ignoremissing=False, rmdir=True):
+    def unlinkpath(
+        self, path: Optional[bytes] = None, ignoremissing=False, rmdir=True
+    ):
         return util.unlinkpath(
             self.join(path), ignoremissing=ignoremissing, rmdir=rmdir
         )
@@ -289,7 +309,7 @@
         return util.unlinkpath(
             self.join(path), ignoremissing=ignoremissing, rmdir=rmdir
         )
-    def utime(self, path=None, t=None):
+    def utime(self, path: Optional[bytes] = None, t=None):
         return os.utime(self.join(path), t)
@@ -294,6 +314,6 @@
         return os.utime(self.join(path), t)
-    def walk(self, path=None, onerror=None):
+    def walk(self, path: Optional[bytes] = None, onerror=None):
         """Yield (dirpath, dirs, files) tuple for each directories under path
         ``dirpath`` is relative one from the root of this vfs. This
@@ -360,7 +380,7 @@
     def __init__(
         self,
-        base,
+        base: bytes,
        audit=True,
        cacheaudited=False,
        expandpath=False,
@@ -381,7 +401,7 @@
         self.options = {}
     @util.propertycache
-    def _cansymlink(self):
+    def _cansymlink(self) -> bool:
         return util.checklink(self.base)
     @util.propertycache
@@ -393,7 +413,7 @@
             return
         os.chmod(name, self.createmode & 0o666)
-    def _auditpath(self, path, mode):
+    def _auditpath(self, path, mode) -> None:
         if self._audit:
             if os.path.isabs(path) and path.startswith(self.base):
                 path = os.path.relpath(path, self.base)
@@ -404,8 +424,8 @@
     def __call__(
         self,
-        path,
-        mode=b"r",
+        path: bytes,
+        mode: bytes = b"rb",
        atomictemp=False,
        notindexed=False,
        backgroundclose=False,
@@ -518,7 +538,7 @@
         return fp
-    def symlink(self, src, dst):
+    def symlink(self, src: bytes, dst: bytes) -> None:
         self.audit(dst)
         linkname = self.join(dst)
         util.tryunlink(linkname)
@@ -538,7 +558,7 @@
         else:
             self.write(dst, src)
-    def join(self, path, *insidef):
+    def join(self, path: Optional[bytes], *insidef: bytes) -> bytes:
         if path:
             parts = [self.base, path]
             parts.extend(insidef)
@@ -551,7 +571,7 @@
 class proxyvfs(abstractvfs):
-    def __init__(self, vfs):
+    def __init__(self, vfs: "vfs"):
         self.vfs = vfs
     def _auditpath(self, path, mode):
@@ -569,7 +589,7 @@
 class filtervfs(proxyvfs, abstractvfs):
     '''Wrapper vfs for filtering filenames with a function.'''
-    def __init__(self, vfs, filter):
+    def __init__(self, vfs: "vfs", filter):
         proxyvfs.__init__(self, vfs)
         self._filter = filter
@@ -573,6 +593,6 @@
proxyvfs.__init__(self, vfs) proxyvfs.__init__(self, vfs)
self._filter = filter self._filter = filter
def __call__(self, path, *args, **kwargs): def __call__(self, path: bytes, *args, **kwargs):
return self.vfs(self._filter(path), *args, **kwargs) return self.vfs(self._filter(path), *args, **kwargs)
...@@ -577,6 +597,6 @@ ...@@ -577,6 +597,6 @@
return self.vfs(self._filter(path), *args, **kwargs) return self.vfs(self._filter(path), *args, **kwargs)
def join(self, path, *insidef): def join(self, path: Optional[bytes], *insidef: bytes) -> bytes:
if path: if path:
return self.vfs.join(self._filter(self.vfs.reljoin(path, *insidef))) return self.vfs.join(self._filter(self.vfs.reljoin(path, *insidef)))
else: else:
...@@ -589,6 +609,6 @@ ...@@ -589,6 +609,6 @@
class readonlyvfs(proxyvfs): class readonlyvfs(proxyvfs):
'''Wrapper vfs preventing any writing.''' '''Wrapper vfs preventing any writing.'''
def __init__(self, vfs): def __init__(self, vfs: "vfs"):
proxyvfs.__init__(self, vfs) proxyvfs.__init__(self, vfs)
...@@ -593,7 +613,7 @@ ...@@ -593,7 +613,7 @@
proxyvfs.__init__(self, vfs) proxyvfs.__init__(self, vfs)
def __call__(self, path, mode=b'r', *args, **kw): def __call__(self, path: bytes, mode: bytes = b'rb', *args, **kw):
if mode not in (b'r', b'rb'): if mode not in (b'r', b'rb'):
raise error.Abort(_(b'this vfs is read only')) raise error.Abort(_(b'this vfs is read only'))
return self.vfs(path, mode, *args, **kw) return self.vfs(path, mode, *args, **kw)
...@@ -596,8 +616,8 @@ ...@@ -596,8 +616,8 @@
if mode not in (b'r', b'rb'): if mode not in (b'r', b'rb'):
raise error.Abort(_(b'this vfs is read only')) raise error.Abort(_(b'this vfs is read only'))
return self.vfs(path, mode, *args, **kw) return self.vfs(path, mode, *args, **kw)
def join(self, path, *insidef): def join(self, path: Optional[bytes], *insidef: bytes) -> bytes:
return self.vfs.join(path, *insidef) return self.vfs.join(path, *insidef)
......
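The annotations above assume `Optional` is imported from the `typing` module in the vfs source. As a rough illustration of the convention they encode (path arguments are bytes, may be omitted to mean the base directory itself, and `join()` always returns bytes), here is a minimal, self-contained sketch; `minimalvfs` is a made-up stand-in for illustration only, not Mercurial's `abstractvfs`:

import os
from typing import Optional


class minimalvfs:
    """Illustrative stand-in only; Mercurial's real vfs classes do much more."""

    def __init__(self, base: bytes):
        self.base = base

    def join(self, path: Optional[bytes], *insidef: bytes) -> bytes:
        # A missing/empty path resolves to the base directory itself,
        # mirroring the Optional[bytes] defaults in the diff above.
        if path:
            return os.path.join(self.base, path, *insidef)
        return self.base

    def stat(self, path: Optional[bytes] = None):
        return os.stat(self.join(path))


if __name__ == '__main__':
    v = minimalvfs(b'.')
    print(v.join(b'foo', b'bar'))  # b'./foo/bar'
    print(v.stat())                # stat of the base directory itself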
@@ -581,7 +581,13 @@
     If uid is None, return the name of the current user."""
     if not uid:
-        return pycompat.fsencode(getpass.getuser())
+        try:
+            return pycompat.fsencode(getpass.getuser())
+        except ModuleNotFoundError:
+            # getpass.getuser() checks for a few environment variables first,
+            # but if those aren't set, imports pwd and calls getpwuid(), none of
+            # which exists on Windows.
+            pass
     return None
...
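For background, getpass.getuser() consults the LOGNAME, USER, LNAME and USERNAME environment variables first and only imports the pwd module (which does not exist on Windows) when none of them is set; that import failure is what the except clause above guards against. A minimal standalone sketch of the same defensive pattern, using os.fsencode in place of pycompat.fsencode; current_user() is a hypothetical helper, not Mercurial's username():

import getpass
import os
from typing import Optional


def current_user() -> Optional[bytes]:
    """Hypothetical helper mirroring the fallback above, not Mercurial's API."""
    try:
        # getpass.getuser() checks LOGNAME, USER, LNAME and USERNAME first;
        # if none is set it imports pwd, which is unavailable on Windows.
        return os.fsencode(getpass.getuser())
    except ModuleNotFoundError:
        return None


if __name__ == '__main__':
    print(current_user() or b'<unknown>')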
@@ -468,6 +468,7 @@
  "log",
  "memmap2",
  "micro-timer",
+ "once_cell",
  "ouroboros",
  "pretty_assertions",
  "rand 0.8.5",
@@ -687,6 +688,12 @@
 ]

 [[package]]
+name = "once_cell"
+version = "1.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2f7254b99e31cad77da24b08ebf628882739a608578bb1bcdfc1f9c21260d7c0"
+
+[[package]]
 name = "opaque-debug"
 version = "0.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
...
@@ -35,6 +35,9 @@
 memmap2 = { version = "0.5.3", features = ["stable_deref_trait"] }
 zstd = "0.5.3"
 format-bytes = "0.3.0"
+# once_cell 1.15 uses edition 2021, while the heptapod CI
+# uses an old version of Cargo that doesn't support it.
+once_cell = "=1.14.0"
 # We don't use the `miniz-oxide` backend to not change rhg benchmarks and until
 # we have a clearer view of which backend is the fastest.
...