Commit 3b35e689 authored by Georges Racinet 🦑

Merged stable into default after publication of Production testing

This new mode of operation was developed on the stable branch of
the tests, because we want to use it on instances running the
current stable Heptapod series (0.19).

The important refactoring of the `heptapod.users` values was done
as part of this effort.

Obviously we also want all of this in the default branch.

Took care of keeping the target image correct in the CI configuration.
Pipeline #18151 passed in 22 minutes and 40 seconds
# Heptapod automated functional tests
WARNING: to test production instances, use the dedicated mode exclusively. Other
modes assume that you are ready to **throw all data away** after the test
run, and hence are suitable at most for preflight full testing of a fresh
production instance.
## Installation
......@@ -13,8 +12,8 @@ data is fully wiped afterwards.
- tox: `pip install --user tox`
- [ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/home):
+ Fedora 29 to 33: `dnf install chromedriver`
+ Debian 9 to 10: `apt install chromium-driver`
All further dependencies will be installed by the first run.
......@@ -25,13 +24,17 @@ These tests can work against Heptapod servers provided either as
- (default) local Docker containers manageable by the system user running the
tests, or
- installed from source and being run by the same user as the tests, or
- completely remote, skipping some of the tests, or
- production, relying on users with at most ownership of a dedicated projects
group, and running a subset of suitable tests.
Except in production server mode, the GitLab root password
will be initialized by the first test to run.
The tests will fail if the GitLab root password is already set
and does not match the expected one.
### Default Docker setup
In the Docker case, the expected container name is by default `heptapod`.
......@@ -143,6 +146,42 @@ about all network options:
The root password option is listed because you probably don't want to have
an instance with the default root password available on the internet.
### Testing a production instance
*New on 2021-02-18*: see !80
To run the tests suitable for production instances, you will first need to
prepare:
- a projects group entirely dedicated to these functional tests
- a dedicated user that owns the dedicated group (more users will probably be
needed in the future).
The production mode is activated by an explicit command-line option. Another
option is used to pass the dedicated user credentials.
Example:
```
~/heptapod/heptapod-tests $ tox -- --heptapod-prod-server \
--heptapod-prod-group-owner-credentials ID:USERNAME:PASSWORD \
--heptapod-url https://foss.heptapod.net \
--heptapod-ssh-port 22 \
--heptapod-ssh-user hg
```
where ID is the numeric user id and USERNAME is the user login name
(e.g. `testfonct`).
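For reference, the credentials are passed as a single colon-separated string.
Here is a minimal parsing sketch in Python; the helper name is hypothetical and
not part of heptapod-tests, whose actual option handling may differ:
```
def parse_owner_credentials(value):
    """Split an 'ID:USERNAME:PASSWORD' string (hypothetical helper).

    maxsplit=2 lets the password itself contain colons.
    """
    user_id, username, password = value.split(':', 2)
    return int(user_id), username, password

# parse_owner_credentials('123:testfonct:s3cret') == (123, 'testfonct', 's3cret')
```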
Remarks and safety:
- The user password must be fully operational: the functional tests won't
take care of the finalization sequence that occurs at first login.
- Do not give the dedicated user any rights outside of the dedicated groups.
- It is advisable to block the dedicated user when not in use (see the API
sketch after this list).
- Be prepared to receive email for Merge Requests at the dedicated user's
address. Arguably, this is part of the testing.
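Blocking and unblocking the dedicated user can be done through the standard
GitLab REST API. A minimal sketch, assuming an administrator token; the URL,
token and user id below are placeholders, and this script is not part of
heptapod-tests:
```
import requests

HEPTAPOD_URL = 'https://heptapod.example.net'  # placeholder instance URL
ADMIN_TOKEN = 'REDACTED'                       # an administrator private token
TEST_USER_ID = 123                             # numeric id of the dedicated user


def set_user_blocked(user_id, blocked=True):
    # GitLab provides POST /users/:id/block and POST /users/:id/unblock
    # to administrators; both answer 201 on success.
    action = 'block' if blocked else 'unblock'
    resp = requests.post(
        '%s/api/v4/users/%d/%s' % (HEPTAPOD_URL, user_id, action),
        headers={'Private-Token': ADMIN_TOKEN})
    resp.raise_for_status()


set_user_blocked(TEST_USER_ID)           # block after a test session
# set_user_blocked(TEST_USER_ID, False)  # unblock before the next one
```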
### Docker: choosing the version to test
......
......@@ -15,6 +15,7 @@ from tests.utils import git
from tests.utils import hg
from tests.utils.heptapod import (
Heptapod,
ProductionHeptapod,
OmnibusHeptapod,
DockerHeptapod,
SourceHeptapod,
......@@ -44,6 +45,28 @@ def pytest_addoption(parser):
parser.addoption('--heptapod-omnibus', action="store_true",
help="Test a Heptapod Omnibus by running the tests "
"as root on the same system ")
parser.addoption('--heptapod-prod-server', action='store_true',
help="Run only those tests that are suitable "
"for a production installation, with precreated "
"users instead of taking with the root account. "
"Implies --heptapod-remote and requires more parameters.")
group = parser.getgroup(
'Heptapod production servers',
description="Required and optional settings to run functional "
"tests against production servers")
group.addoption('--heptapod-prod-group-owner-credentials',
help="prod servers only: numeric id, username and "
"password for owner of the group where most of the "
"testing will happen. "
"Provided as colon separated fields, such as "
"123:ftest_owner:s1cre3t.")
group.addoption('--heptapod-prod-group-id', type=int,
help="prod servers only: numeric id for the group "
"where most projects and subgroups are to be created. "
"If not specified, a random group with minimal depth "
"among those owned by the user given in "
"--heptapod-prod-group-owner-credentials will be used."
"Some tests may still use personal namespaces.")
parser.addoption('--heptapod-remote', action='store_true',
help="Test a remote server. This means that no "
"direct command nor file system access is possible "
......@@ -53,7 +76,7 @@ def pytest_addoption(parser):
help="Have all Mercurial projects created as 'native' "
"(HGitaly backed)")
parser.addoption('--heptapod-repositories-root',
help="Root of the group/repository hierarchy. "
help="Root directory of the repository storage. "
"This is mandatory for source installs and ignored "
"in Docker mode.")
parser.addoption('--heptapod-gdk-root',
......@@ -120,6 +143,10 @@ def pytest_configure(config):
"hg_native: mark test as running only if "
"session is in hg-native mode"
)
config.addinivalue_line("markers",
"prod_server: mark test as suitable "
"on production servers."
)
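# Note on usage: the `suitable.prod_server` decorator applied in the test
# modules below presumably just attaches this registered marker. A minimal
# sketch of such a helper (the real tests.utils.suitable module may differ):
#
#     import pytest
#     prod_server = pytest.mark.prod_server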
def pytest_collection_modifyitems(config, items):
......@@ -133,9 +160,12 @@ def pytest_collection_modifyitems(config, items):
reason="needs to manage Heptapod services")
skip_hg_git = pytest.mark.skip(reason="needs non hg-native mode")
skip_hg_native = pytest.mark.skip(reason="needs hg-native mode")
skip_prod_server = pytest.mark.skip(
reason="not meant for production servers ")
no_reverse_call = not(config.getoption('heptapod_reverse_call_host'))
prod_server = config.getoption('heptapod_prod_server')
remote = config.getoption('heptapod_remote') or prod_server
source_install = config.getoption('heptapod_source_install')
gdk = config.getoption('heptapod_gdk')
omnibus = config.getoption('heptapod_omnibus')
......@@ -155,6 +185,8 @@ def pytest_collection_modifyitems(config, items):
item.add_marker(skip_hg_git)
if "hg_native" in item.keywords and not hg_native:
item.add_marker(skip_hg_native)
if prod_server and "prod_server" not in item.keywords:
item.add_marker(skip_prod_server)
heptapod_instance = None
......@@ -199,7 +231,22 @@ def heptapod(pytestconfig):
try:
if active_threads == 1: # we're the first
if pytestconfig.getoption('heptapod_prod_server'):
owner_creds = pytestconfig.getoption(
'heptapod_prod_group_owner_credentials')
if owner_creds is None:
raise RuntimeError(
"Production server mode: please specify a Group owner "
"with the --heptapod-prod-group-owner-credentials "
"option (see --help for syntax)")
heptapod = ProductionHeptapod(
group_owner_credentials=owner_creds,
default_group_id=pytestconfig.getoption(
'heptapod_prod_group_id'),
**common
)
elif pytestconfig.getoption('heptapod_source_install'):
repos_root = pytestconfig.getoption(
'heptapod_repositories_root')
heptapod = SourceHeptapod(repositories_root=repos_root,
......@@ -233,7 +280,11 @@ def heptapod(pytestconfig):
@contextlib.contextmanager
def project_fixture(heptapod, name_prefix, owner=None, group=None, **opts):
if owner is None:
owner = heptapod.default_user_name
if group is None:
group = heptapod.default_group
name = '%s_%s' % (name_prefix, str(time.time()).replace('.', '_'))
with Project.api_create(heptapod, owner, name,
group=group, **opts) as proj:
......@@ -264,12 +315,13 @@ def additional_user(heptapod, accepts_concurrent):
@pytest.fixture()
def test_project(heptapod, accepts_concurrent):
with project_fixture(heptapod, 'test_project') as proj:
yield proj
@pytest.fixture()
def test_project_with_runner(test_project):
test_project.only_specific_runners()
with Runner.api_register(test_project, unique_name('fixt')) as runner:
yield test_project, runner
......
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAzpz0B9zsrgSk9iH5F2f1iZhaG0jE1AnVEmv/zyAmxbHOzrgmqvH/
D4Ce1SV3/xUYhwN5NM2ONcCfVbwmPoLyOuwB68jl4b1H99rq3/NQq0J+4OUieKCT2PjTj7
QyGnBpREyLwCeB2A6YdZWwn2HslFxU3y3WLvRorc9yGU+VpeaDkdWjHtdXtSuFGkVrVhDz
u4J/XRR6JbNBYwyaMuYdmt+jPHUUTF9JYiKB6oP5j9o4c7ErtFhDvOTqPy0m2BZyDEoluP
OZnDOapE00r+LvofOcQtu4+PNPLTX1xIhoNzpgJ1CKM2bE+K9jKocebSRwTARzK6hDAml2
+Z9C/pkdCS9zGN7P0NU2VfVGcUN4dnbr8EA/E2K7HBdDyE8FQuUChiRaWicMkXU9P6dOfc
8M9CaWw8Dm0Q32lUSI41dmxki/BldpqB4NtvbKuIYxk4NxfC97LFOu8unUxhwWlf+WPTdR
4pGb7CgTW8UBBSjPx1n5RYnp4CZg8QC76zVxKTQdAAAFmLVvd3O1b3dzAAAAB3NzaC1yc2
EAAAGBAM6c9Afc7K4EpPYh+Rdn9YmYWhtIxNQJ1RJr/88gJsWxzs64Jqrx/w+AntUld/8V
GIcDeTTNjjXAn1W8Jj6C8jrsAevI5eG9R/fa6t/zUKtCfuDlInigk9j404+0MhpwaURMi8
AngdgOmHWVsJ9h7JRcVN8t1i70aK3PchlPlaXmg5HVox7XV7UrhRpFa1YQ87uCf10UeiWz
QWMMmjLmHZrfozx1FExfSWIigeqD+Y/aOHOxK7RYQ7zk6j8tJtgWcgxKJbjzmZwzmqRNNK
/i76HznELbuPjzTy019cSIaDc6YCdQijNmxPivYyqHHm0kcEwEcyuoQwJpdvmfQv6ZHQkv
cxjez9DVNlX1RnFDeHZ26/BAPxNiuxwXQ8hPBULlAoYkWlonDJF1PT+nTn3PDPQmlsPA5t
EN9pVEiONXZsZIvwZXaageDbb2yriGMZODcXwveyxTrvLp1MYcFpX/lj03UeKRm+woE1vF
AQUoz8dZ+UWJ6eAmYPEAu+s1cSk0HQAAAAMBAAEAAAGBAIX1Nzcd4wpUkovOrQyi54yFje
5guNTtZwthoGKvatQEm5xlwxRUgFWRw+lYOLvW9qca9mvo1ko9kFDrAzTGe7z+JaS1BW7d
5RprApueyu+u1kqD5VymaBVmBu0GHPINbgLNSlKUitgFZo3eNryvpc7vKkvlERgyeOgwNr
74XYNJuIZGKFOntQMOq2bEGXqc1Rn+2wsDasqktUE9+4ACuLEgTFq3Yii+IvsQeoENfjHp
f25rMXXlQE3pcYLyiFvT+IV4NfbVKVh/tvc6nXZX2b4YQxCrOCU9jp2pzNr4W9WDpMFd9e
h/7p5G2i6XrTLqLIJs6shsXqSls/shRpYDnp1smjIKvnJGkPz1e6OhKhmSpRJUshebX4dk
+tuvZwg9iNkt8KiJ/VfbOoAmZt9MDzvhL9TDR3NYuwUg9L5DT+EFsuF41KMxPiIlIbPvie
VFNIpB3LYYQwisXiIwnE0qPGLIDUbLGJhai+rOiwJ5xXW9VljhVdrmmdgtr58VEVltwQAA
AMEAjFrIuqdINZUmEV/D6+qcavzvHIJj0/VFfG23wUWoqQ0nlUmscnTgkXBs6SsSnaKQpf
hhCAb5BQ3ojXljGc9sVcDQITfHkEHc1b8wbbdGMIQDHlDuRAU4JYA6Jh0C7rTRkhVcAVq7
d6o7nESqo3J0b2UcJIBQslQvR7CAppITkj86E+SrcYFIFt2tCJ3KLVyWrFTdKEQNf5Y2yl
i6P4fsmtNrVmg3Yerxyt5xlMVXPiqyNQ7tJjZJaRde+3eJnmNQAAAAwQDrMpkWNzZZ2IAm
FKsUNKm/WBUveZI4oGDEokm5miGhpDbyWfgAkq0yQQqwg4l+qXWN8ZtlP7B3VJ4anK0eZB
q0lyR9SrgcyMrqo0FZqE5OsRwMhCBcCYqqeiswv2VrkHuEjBbzD0z9XhcGskM43VXzHuHG
22nGVgaaD9Kx4euVunWjSO46xxvq9G8iCja4o2sW1xxhtHTkQsvb+i28zYl0kE8KPifBwg
hxyFYYwF5ZfRocAQoGb1A8+edfc+4pGe0AAADBAODjIxARONEF3JtJeyxGMad65XXyj6DY
nEuL/L8QrCnOUpRD52FoP4aI/yBGfpgTj6XgG1K2llUxiJLTLASq/r7UO2ioPhyAH9jP9v
/gJntYJQnOlujEXqS2hnZf9TPl3PtfV4dYfdrUiv24ztF3mEo5Eeij7/F0lXvICwVpRitb
48cO2uJMxc62dtZuHKAK0gBa4HUl8lmzeJ+HYeTmAe7aYNU6HHEWN+YrpnP0kwzuUCHOh1
5ligOWScD+Mm188QAAACBncmFjaW5ldEBwdXJpdHkudG9tYmUucmFjaW5ldC5mcgE=
-----END OPENSSH PRIVATE KEY-----
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOnPQH3OyuBKT2IfkXZ/WJmFobSMTUCdUSa//PICbFsc7OuCaq8f8PgJ7VJXf/FRiHA3k0zY41wJ9VvCY+gvI67AHryOXhvUf32urf81CrQn7g5SJ4oJPY+NOPtDIacGlETIvAJ4HYDph1lbCfYeyUXFTfLdYu9Gitz3IZT5Wl5oOR1aMe11e1K4UaRWtWEPO7gn9dFHols0FjDJoy5h2a36M8dRRMX0liIoHqg/mP2jhzsSu0WEO85Oo/LSbYFnIMSiW485mcM5qkTTSv4u+h85xC27j4808tNfXEiGg3OmAnUIozZsT4r2Mqhx5tJHBMBHMrqEMCaXb5n0L+mR0JL3MY3s/Q1TZV9UZxQ3h2duvwQD8TYrscF0PITwVC5QKGJFpaJwyRdT0/p059zwz0JpbDwObRDfaVRIjjV2bGSL8GV2moHg229sq4hjGTg3F8L3ssU67y6dTGHBaV/5Y9N1HikZvsKBNbxQEFKM/HWflFiengJmDxALvrNXEpNB0= gracinet@purity.tombe.racinet.fr
......@@ -5,6 +5,7 @@ from selenium.webdriver.support.ui import WebDriverWait
from .utils import (
hg,
suitable,
wait_assert,
)
......@@ -443,6 +444,7 @@ def test_mergerequest_add_rebase_publish(test_project, tmpdir):
"Commit summaries don't contain %r" % additional_msg)
@suitable.prod_server
def test_mergerequest_rebase_push_then_publish(test_project, tmpdir):
"""A simpler scenario for Heptapod: pushing the rebase, then publishing.
......
......@@ -144,6 +144,17 @@ def test_rename(test_project, tmpdir):
# let's check what GitLab sees
assert proj.api_branch_titles() == {'branch/default': 'Commit 1'}
# Starting with GitLab 10.3, we have a redirection
assert 'renamed' not in orig_url # to be sure of what we are testing
resp = requests.get(orig_url, allow_redirects=False)
# response is actually 302, but that may change
assert resp.status_code in (301, 302, 303)
# Pull on the old URL is redirected
repo.hg('pull', orig_url)
log = repo.hg('log', '-T', '{desc}:{phase}\n')
assert log.splitlines() == ['Commit 1:public', 'Commit 0:public']
def test_403(test_project):
"""Test that unauthorized commands give rise to a 403.
......@@ -164,7 +175,7 @@ def test_cli_404(test_project):
"""
heptapod = test_project.heptapod
user = 'test_basic'
basic_user_creds = (user, heptapod.users[user].password)
resp = requests.get(heptapod.url + '/no/such/project?cmd=capabilities',
auth=basic_user_creds)
......
......@@ -7,6 +7,7 @@ import requests
from .utils import (
assert_webdriver_not_error,
needs,
suitable,
)
from .utils.reverse_calls import HttpListener
from .utils.hg import LocalRepo
......@@ -32,6 +33,7 @@ def prepare_simple_repo(proj, repo_path):
return repo
@suitable.prod_server
def test_push_basic(test_project, tmpdir):
"""
Push two changesets, one being a draft, confirm their arrival and phases
......@@ -486,6 +488,7 @@ def test_push_tags_branch_heads(test_project, tmpdir):
assert tags['other-1.0']['commit']['title'] == "Commit 2"
@suitable.prod_server
def test_push_tag_ci_job(test_project_with_runner, tmpdir):
proj, runner = test_project_with_runner
repo_path = tmpdir.join('repo1')
......
from io import BytesIO
from zipfile import ZipFile
from .utils import suitable
from .utils.hg import LocalRepo
from .utils.runner import Runner
from selenium.webdriver.support.ui import WebDriverWait
def test_register_grab_job(test_project, tmpdir):
test_project.only_specific_runners()
runner = Runner.api_register(test_project, 'test_register')
# registration consistency
......@@ -56,8 +61,10 @@ def test_register_grab_job(test_project, tmpdir):
assert resp.status_code == 404
@suitable.prod_server
def test_pipeline_pages_artifacts(test_project, tmpdir):
with Runner.api_register(test_project, 'test_register') as runner:
test_project.only_specific_runners()
# push something and grab a job
repo_path = tmpdir.join('repo1')
repo = LocalRepo.init(repo_path)
......@@ -69,6 +76,18 @@ def test_pipeline_pages(test_project, tmpdir):
job = runner.wait_assert_one_job()
zip_path = tmpdir / 'artifacts.zip'
with ZipFile(zip_path, 'w') as zf:
zf.writestr('report.txt', b'artifact content')
with zip_path.open('rb') as zfobj:
runner.upload_artifact(job, zfobj)
artif_fname, artif_body = test_project.get_job_artifacts(job)
assert artif_fname == 'artifacts.zip'
with ZipFile(BytesIO(artif_body), 'r') as zf:
with zf.open('report.txt') as innerf:
assert innerf.read() == b'artifact content'
# find the pipeline id in job payload
job_vars = {var['key']: var['value'] for var in job['variables']}
pipeline_id = int(job_vars['CI_PIPELINE_ID'])
......
from .utils.git import LocalRepo as GitLocalRepo
from .utils.hg import LocalRepo
from .utils.project import ProjectAccess
from .utils import suitable
def make_repo(tmpdir, name):
......@@ -39,6 +40,7 @@ def assert_pushed_repo(project, tmpdir, clone_name='clone'):
}
@suitable.prod_server
def test_owner_push_pull(test_project, tmpdir):
repo = make_repo(tmpdir, 'repo1')
ssh_cmd, ssh_url = test_project.owner_ssh_params
......
......@@ -33,7 +33,7 @@ class Group:
if parent is not None:
data['parent_id'] = parent.id
headers = {'Private-Token': heptapod.users[user_name].token}
resp = requests.post(heptapod.url + cls.api_uri,
headers=headers,
data=data)
......@@ -45,10 +45,17 @@ class Group:
return self.id == other.id
@classmethod
def api_retrieve(cls, heptapod, group_id, owner_name=None):
"""Return a checked Group object for the given id.
:owner_name: if specified, registered as :attr:`owner_name` on the
returned object, and used for all API calls, including the check
performed by this method.
"""
grp = Group(heptapod=heptapod,
id=group_id,
full_path=None,
owner_name=owner_name,
path=None)
resp = grp.api_get()
assert resp.status_code == 200
......@@ -57,18 +64,21 @@ class Group:
grp.full_path = info['full_path']
return grp
def private_token(self):
"""Return headers with a token strong enough for all operations.
Namely, the token of the owner if one is known, otherwise an
Administrator token.
"""
user_name = self.owner_name if self.owner_name is not None else 'root'
return {'Private-Token': self.heptapod.users[user_name].token}
def api_get(self):
return requests.get(self.api_url, headers=self.private_token())
def api_post(self, subpath='', **params):
return requests.post('/'.join((self.api_url, subpath)),
headers=self.private_token(),
data=params)
def api_add_member(self, user, access_level):
......@@ -78,7 +88,7 @@ class Group:
@classmethod
def api_search(cls, heptapod, group_name, user_name='root'):
headers = {'Private-Token': heptapod.users[user_name].token}
resp = requests.get(heptapod.url + cls.api_uri,
headers=headers,
params=dict(search=group_name))
......@@ -93,10 +103,7 @@ class Group:
return self.heptapod.url + self.api_uri + '/%d' % self.id
def api_delete(self):
resp = requests.delete(self.api_url, headers=self.private_token())
assert resp.status_code in (202, 204)
def fs_path(self):
......
......@@ -14,6 +14,7 @@ import tempfile
import time
from urllib.parse import urlparse
from tests.utils.group import Group
from tests.utils.user import User
from tests.utils import docker
from tests.utils import session
......@@ -67,6 +68,23 @@ class Heptapod:
chrome_driver_args = ()
default_user_name = 'root'
"""Default user for project, group creation etc."""
default_group = None
"""Group instance where to create projects in by default."""
instance_type = 'development'
"""The type of instance, meaning how these tests operate on it.
It is of course completely possible to treat a development instance
as if it were for production: that's what happens when developping the
production server tests, of course.
Treating a production instance as a development one is also technically
possible (strongly discouraged of course).
"""
def __init__(self, url, ssh_user, ssh_port,
hg_native=False,
reverse_call_host=None,
......@@ -107,11 +125,11 @@ class Heptapod:
@property
def root_token_headers(self):
return {'Private-Token': self.users['root'].token}
@property
def basic_user_token_headers(self):
return {'Private-Token': self.users['test_basic'].token}
def run_shell(self, command, **kw):
exit_code, output = self.execute(command, **kw)
......@@ -126,22 +144,7 @@ class Heptapod:
def get_user(self, name):
"""Return a :class:`User` instance, or `None`."""
return self.users.get(name)
def new_webdriver(self):
options = selenium.webdriver.ChromeOptions()
......@@ -157,8 +160,8 @@ class Heptapod:
return selenium.webdriver.Chrome(options=options)
def get_user_webdriver(self, user_name):
user = self.users[user_name]
driver = user.webdriver
if driver is not None:
return driver
......@@ -169,13 +172,12 @@ class Heptapod:
driver = self.new_webdriver()
# guaranteeing driver to be available for teardown
# as soon as created
user.webdriver = driver
session.login_as_root(driver, self, user.password)
else:
# TODO should init webdriver from here and store it before login
# attempt as well
driver = session.make_webdriver(self, user_name, user.password)
user.webdriver = driver
return driver
def wait_startup(self, first_response_timeout=INITIAL_TIMEOUT,
......@@ -225,6 +227,12 @@ class Heptapod:
def load_instance_cache(self):
path = self.instance_cache_file()
def invalidate_retry():
logger.warning("Removing cache file %r and starting afresh.", path)
os.unlink(path)
return self.load_instance_cache()
try:
with open(path) as cachef:
cached = json.load(cachef)
......@@ -234,15 +242,44 @@ class Heptapod:
"Heptapod instance info will be retrieved "
"or initialized", path)
else:
instance_type = cached.get('instance_type')
if instance_type != self.instance_type:
# None means before the introduction of this invalidation
# in case anyone wonders (not worth a specific message)
logger.warning(
"Cache file %r is for another instance type (%r) ",
path, instance_type)
return invalidate_retry()
url = cached.get('url')
# for now all development instances have the same two
# fixed test users (root and test_basic). There is already
# a token invalidation logic.
if instance_type == 'production' and url != self.url:
# None means before the introduction of this invalidation
# in case anyone wonders (not worth a specific message)
logger.warning(
"Cache file %r is for another instance (%r) ",
path, url)
return invalidate_retry()
for name, info in cached['users'].items():
if 'id' not in info:
logger.warning("Cache file %r is from an earlier version "
"of heptapod-tests. ", path)
return invalidate_retry()
self.users[name] = User(heptapod=self,
name=name,
id=info['id'],
token=info['token'])
def update_instance_cache(self):
users = {user.name: dict(token=user.token, id=user.id)
for user in self.users.values()}
with open(self.instance_cache_file(), 'w') as cachef:
json.dump(dict(url=self.url,
instance_type=self.instance_type,
users=users), cachef)
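# For illustration, the resulting cache file then looks roughly like this
# (values are made up):
#
#     {"url": "https://heptapod.example.net",
#      "instance_type": "production",
#      "users": {"ftest_owner": {"token": "...", "id": 123}}}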
def prepare(self, root_password):
"""Make all preparations for the Heptapod instance to be testable.
......@@ -264,7 +301,11 @@ class Heptapod: