

For running tests of the test suites you have two options. You can go the easy way and just run the test suite in docker. For some tasks you may also need to install the test suite natively, which requires a bit more setup since PHP and some dependencies have to be installed.

Both ways are described below.

Testing with test suite in docker

Let’s see what is available. Invoke the following command from within the root of the oCIS repository.

make -C tests/acceptance/docker help

Basically we have two sources for feature tests and test suites:

  • the oCIS test suite, living in this repository under tests/acceptance/features
  • the ownCloud 10 core acceptance test suite

At the moment both can be applied to oCIS, since the API of oCIS is designed to be compatible with ownCloud.

Since we have to offer a migration path to existing ownCloud users, you can use your existing ownCloud as a storage backend for oCIS. As another storage backend we offer the oCIS native storage, also called “oCIS”, which stores files directly on disk. The storage backend in use is also reflected in the tests: there are always separate tests for oCIS storage and ownCloud storage.

You can invoke two types of test suite runs:

  • run a full test suite, which consists of multiple feature tests
  • run a single feature test

Run full test suite

The make targets for full test suite runs are named after the corresponding CI pipelines.

For example make -C tests/acceptance/docker localApiTests-apiAccountsHashDifficulty-ocis runs the same tests as the localApiTests-apiAccountsHashDifficulty-ocis CI pipeline, which runs the oCIS test suite “apiAccountsHashDifficulty” against an oCIS with oCIS storage.

For example make -C tests/acceptance/docker Core-API-Tests-owncloud-storage-3 runs the same tests as the Core-API-Tests-owncloud-storage-3 CI pipeline, which runs the third (out of ten) ownCloud test suite against an oCIS with ownCloud storage.

Run single feature test

The single feature tests can also be run against the different storage backends. For this, multiple make targets following the schema test-<test-source>-feature-<storage>-storage exist. To select a single feature test, add an additional BEHAT_FEATURE=... parameter when invoking the make command:

make -C tests/acceptance/docker test-ocis-feature-ocis-storage BEHAT_FEATURE='tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature'

BEHAT_FEATURE must point to a valid feature definition.
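Because a docker test run is slow to start, it can be worth checking the feature path before invoking make. The following is a sketch; `check_feature` is a hypothetical helper, not part of the repository's Makefile:

```shell
# Hypothetical helper that guards against typos in the feature path
# before a slow docker test run starts.
check_feature() {
  if [ -f "$1" ]; then
    echo "ok: $1"
  else
    echo "missing feature file: $1" >&2
    return 1
  fi
}
```

You could then chain it in front of the make invocation, e.g. `check_feature tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature && make -C tests/acceptance/docker test-ocis-feature-ocis-storage BEHAT_FEATURE=...`.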

oCIS image to be tested (or: skip build and take existing image)

By default the tests are run against a docker image built from your current working state of the oCIS repository. For some purposes it might also be handy to use an oCIS image from Docker Hub. For that you can provide the optional flag OCIS_IMAGE_TAG=..., which must contain an available tag of the owncloud/ocis image on Docker Hub (e.g. latest).

make -C tests/acceptance/docker localApiTests-apiAccountsHashDifficulty-ocis OCIS_IMAGE_TAG=latest

Test log output

While a test is running or when it is finished, you can attach to the logs generated by the tests.

make -C tests/acceptance/docker show-test-logs
The log output is opened in less. You can navigate up and down with the cursor keys. By pressing “F” you can follow the latest lines of the output.


During testing we start a Redis and an oCIS docker container. These are not stopped automatically. You can stop them with:

make -C tests/acceptance/docker clean

Testing with test suite natively installed

We are using the ownCloud 10 acceptance test suite against oCIS.

Getting the tests

All you need to do to get the acceptance tests is check out the core repo:

git clone https://github.com/owncloud/core.git

Run ocis

To start ocis:


PROXY_ENABLE_BASIC_AUTH will allow the acceptance tests to make requests against the provisioning api (and other endpoints) using basic auth.
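A minimal start command could look like the following sketch. The binary path `./bin/ocis` and the `server` subcommand are assumptions about your local build; only the PROXY_ENABLE_BASIC_AUTH variable comes from the note above.

```shell
# Start all oCIS services in a single process, with basic auth enabled
# on the proxy so the acceptance tests can authenticate.
PROXY_ENABLE_BASIC_AUTH=true ./bin/ocis server
```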

Run the acceptance tests

First we need to clone the testing app, which contains the skeleton files required for running the tests. In the ownCloud 10 core repository, clone the testing app with the following command:

git clone https://github.com/owncloud/testing apps/testing

Then run the api acceptance tests with the following command from the root of the ownCloud 10 core repository:

make test-acceptance-api \
TEST_SERVER_URL=https://localhost:9200 \
TEST_OCIS=true \
SKELETON_DIR=apps/testing/data/apiSkeleton

Make sure to adjust the settings TEST_SERVER_URL and OCIS_REVA_DATA_ROOT according to your environment.

This will run all tests that are relevant to oCIS.

To run a single test, add BEHAT_FEATURE=<feature file> to the make command.

To run tests with a different storage driver, set STORAGE_DRIVER accordingly. It can be set to OCIS or OWNCLOUD; the default is OWNCLOUD.
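Combining the two parameters, a single-scenario run against the OCIS storage driver could look like this sketch; the `<feature file>` placeholder must be replaced with a real feature path:

```shell
make test-acceptance-api \
TEST_SERVER_URL=https://localhost:9200 \
TEST_OCIS=true \
STORAGE_DRIVER=OCIS \
BEHAT_FEATURE=<feature file>
```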

Use existing tests for BDD

As a lot of scenarios are written for oC10, we can use those tests for behaviour-driven development in oCIS. Every scenario that does not work in oCIS with “owncloud” storage is listed in tests/acceptance/expected-failures-on-OWNCLOUD-storage.md with a link to the related issue. Every scenario that does not work in oCIS with “ocis” storage is listed in tests/acceptance/expected-failures-on-OCIS-storage.md with a link to the related issue.

Those scenarios are run in the ordinary acceptance test pipeline in CI. The scenarios that fail are checked against the expected failures. If there are any differences then the CI pipeline fails. Similarly, scenarios that do not work in oCIS with EOS storage are listed in tests/acceptance/expected-failures-on-EOS-storage.md.

If you want to work on a specific issue:

  1. adjust the core commit id to the latest commit in core so that CI will run the latest test code and scenarios from core. To do that, change CORE_COMMITID in .drone.env:

     # The test runner source for API tests
  2. locally run each of the tests marked with that issue in the expected failures file.


    make test-acceptance-api \
    TEST_SERVER_URL=https://localhost:9200 \
    TEST_OCIS=true \
    BEHAT_FEATURE=<feature file>
  3. the tests will fail, try to understand how and why they are failing

  4. fix the code

  5. go back to 2. and repeat until the tests pass.

  6. remove those tests from the expected failures file

  7. make a PR that has the fixed code, and the relevant lines removed from the expected failures file.
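The .drone.env change from step 1 could look like the following sketch; the value shown is a placeholder, not a real commit id:

```shell
# The test runner source for API tests
CORE_COMMITID=<latest-core-commit-sha>
```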