
Spaces Provider

The current implementation in oCIS might not yet fully reflect this concept. Feel free to add links to ADRs, PRs and Issues in short warning boxes like this.

A storage provider manages the resources identified by a reference, using a storage driver to access the underlying storage system.

[C4 component diagram for an oCIS spaces provider (2021-07-22). A Client [Container: C++, Kotlin, Swift or Vue], i.e. a desktop, mobile or web client, reads from and writes to the oCIS proxy [Component: golang] (WebDAV, libregraph, CS3). The proxy routes requests to oc10 or oCIS, mints an internal JWT and forwards requests to the reva frontend [Component: golang] (WebDAV, OCS, OCM, tus), which handles protocol translation. The frontend reads from and writes to the reva gateway [Component: golang] (CS3, tus), the API facade for the internal reva services, which forwards to the reva storage provider [Component: golang] (CS3, storage registry). The storage provider hosts multiple storage spaces using a storage driver and reads from and writes to a Storage System [Software System] (POSIX, S3) that provides persistent storage.]

An oCIS spaces provider manages resources in storage spaces by persisting them with a specific storage driver in a storage system.
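
Central to this is the notion of a reference: a resource can be addressed by a path, by an id, or by an id combined with a path relative to it. The following Go sketch illustrates the idea with deliberately simplified types; the authoritative message definitions live in the CS3 APIs, so treat the field names here as illustrative only.

    package main

    import "fmt"

    // ResourceId identifies a resource within a specific storage provider.
    // Field names are simplified for illustration.
    type ResourceId struct {
        StorageId string // which storage (space) holds the resource
        OpaqueId  string // the resource within that storage
    }

    // Reference addresses a resource by path, by id, or by both.
    type Reference struct {
        ResourceId *ResourceId // nil for purely path based references
        Path       string      // empty for purely id based references
    }

    func describe(ref Reference) string {
        switch {
        case ref.ResourceId == nil:
            return "path based: " + ref.Path
        case ref.Path == "":
            return fmt.Sprintf("id based: %s!%s", ref.ResourceId.StorageId, ref.ResourceId.OpaqueId)
        default:
            // a storage space id plus a path relative to its root
            return fmt.Sprintf("combined: %s!%s:%s", ref.ResourceId.StorageId, ref.ResourceId.OpaqueId, ref.Path)
        }
    }

    func main() {
        fmt.Println(describe(Reference{Path: "/home/Photos"}))
        fmt.Println(describe(Reference{ResourceId: &ResourceId{StorageId: "993", OpaqueId: "3994486"}}))
    }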

Frontend

The oCIS frontend service starts all services that handle incoming HTTP requests:

  • ocdav for ownCloud flavoured WebDAV
  • ocs for sharing, user provisioning, capabilities and other OCS API endpoints
  • datagateway for uploads and downloads
  • TODO: ocm
[Sequence diagram: how the frontend services handle incoming requests. A GET /data/<transfer_token> is handled by the datagateway, which extracts the target URL from the transfer token (a JWT) and GETs it from the dataprovider of the responsible storage (home, users, ...). A PROPFIND /webdav is handled by ocdav, which sends Stat and ListContainer requests through the gateway to the storageprovider. A POST /ocs/v1/apps/files_sharing/api/v1/shares (path=/path/to/file, shareType=0, shareWith=<username>) is handled by ocs, which sends a Stat followed by CreateShare, CreatePublicShare or CreateOCMShare through the gateway.]
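
To make the ocs request in the diagram above concrete, here is a minimal sketch of the share creation call using Go's standard library. The host, the v1.php path prefix and the einstein/relativity demo credentials are assumptions for illustration, not part of this document.

    package main

    import (
        "fmt"
        "net/http"
        "net/url"
        "strings"
    )

    func main() {
        // Host, path prefix and credentials are assumptions for illustration;
        // a dev deployment with a self-signed certificate would additionally
        // need a custom TLS config here.
        endpoint := "https://localhost:9200/ocs/v1.php/apps/files_sharing/api/v1/shares"

        form := url.Values{}
        form.Set("path", "/path/to/file") // path relative to the user's home
        form.Set("shareType", "0")        // 0 = share with a user
        form.Set("shareWith", "einstein") // demo user name

        req, err := http.NewRequest(http.MethodPost, endpoint, strings.NewReader(form.Encode()))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
        req.SetBasicAuth("einstein", "relativity") // demo credentials

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // the body contains the OCS XML response
    }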

WebDAV

The ocdav service handles not only all WebDAV requests under (remote.php/)(web)dav but also some other legacy endpoints like status.php:

| endpoint | service | CS3 api | CS3 namespace | description | TODO |
| --- | --- | --- | --- | --- | --- |
| **ownCloud 10 / current oCIS setup:** | | | | | |
| status.php | ocdav | - | - | currently static | should return compiled version and dynamic values |
| (remote.php/)webdav | ocdav | storageprovider | /home | the old webdav endpoint | |
| (remote.php/)dav/files/<username> | ocdav | storageprovider | /home | the new webdav endpoint | |
| (remote.php/)dav/meta/<fileid>/v | ocdav | storageprovider | - | id based versions | |
| (remote.php/)dav/trash-bin/<username> | ocdav | recycle | - | trash | should aggregate the trash of storage spaces the user has access to |
| (remote.php/)dav/public-files/<token> | ocdav | storageprovider | /public/<token> | public links | |
| (remote.php/)dav/avatars/<username> | ocdav | - | - | avatars, hardcoded look up from user provider and cache | |
| **CernBox setup:** | | | | | |
| (remote.php/)webdav | ocdav | storageprovider | / | | existing folder sync pairs in legacy clients will break when moving the user home down in the path hierarchy |
| (remote.php/)webdav/home | ocdav | storageprovider | /home | | |
| (remote.php/)webdav/users | ocdav | storageprovider | /users | | |
| (remote.php/)dav/files/<username> | ocdav | storageprovider | /users/<user_layout> | | |
| **Spaces concept also needs a new endpoint:** | | | | | |
| (remote.php/)dav/spaces/<spaceid>/<relative_path> | ocdav | storageregistry & storageprovider | - | bypass the path based namespace and directly talk to the responsible storage provider using a relative path | spaces concept needs to point to storage spaces; allow accessing spaces, listing is done by the graph api |
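
A hedged sketch of what a request against the proposed spaces endpoint could look like. The space id, host and credentials are made up for illustration, and the exact URL shape may still change as the spaces concept evolves.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        // Hypothetical space id and relative path, for illustration only.
        spaceURL := "https://localhost:9200/remote.php/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157/Photos"

        // Minimal PROPFIND body asking for the file id of each child.
        body := `<?xml version="1.0"?><d:propfind xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns"><d:prop><oc:fileid/></d:prop></d:propfind>`

        req, err := http.NewRequest("PROPFIND", spaceURL, strings.NewReader(body))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Depth", "1")
        req.Header.Set("Content-Type", "application/xml")
        req.SetBasicAuth("einstein", "relativity") // demo credentials

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        xmlBody, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(xmlBody)) // multistatus response with one entry per child
    }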

The correct endpoint for a user's home storage space in oc10 is remote.php/dav/files/<username>. In oc10 all requests at this endpoint use a path based reference that is relative to the user's home. In oCIS this can be configured and defaults to /home as well. Other API endpoints like ocs and the web UI still expect this to be the user's home.

In oc10 we originally had remote.php/webdav, which would render the current user's home storage space. Early versions (pre OC7) would jail all received shares into a remote.php/webdav/shares subfolder. The semantics for syncing such a folder are not trivially predictable, which is why we made shares freely mountable anywhere in the user's home.

The current reva implementation jails shares into a remote.php/webdav/Shares folder for performance reasons. Obviously, this brings back the special semantics for syncing. In the future we will follow a different solution and jail the received shares into a dedicated /shares space, on the same level as /home and /spaces. We will add a dedicated API to list all storage spaces a user has access to and where they are mounted in the user's namespace.

TODO: rewrite this hint with /dav/spaces. Existing folder sync pairs in legacy clients will break when moving the user home down in the path hierarchy, like CernBox did. For legacy clients the remote.php/webdav endpoint will no longer list the user's home directly, but instead present the different types of storage spaces:

  • remote.php/webdav/home: the user's home is pushed down into a new home storage space
  • remote.php/webdav/shares: all mounted shares will be moved to a new shares storage space
  • remote.php/webdav/spaces: other storage spaces the user has access to, e.g. group or project drives

Sharing

The OCS Share API endpoint /ocs/v1.php/apps/files_sharing/api/v1/shares returns shares, which have their own share id and reference files using a path relative to the user's home. The API also lists the numeric storage id as well as the string type storage_id (which is confusing … but yeah), which would allow constructing combined references with a storage space id and a path relative to the root of that storage space. The web UI however assumes that it can take the path from the file_target and append it to the user's home to access the file.

The API already returns the storage id (and numeric id) in addition to the file id:

    <storage_id>home::auser</storage_id>
    <storage>993</storage>
    <item_source>3994486</item_source>
    <file_source>3994486</file_source>
    <file_parent>3994485</file_parent>
    <file_target>/Shared/Paris.jpg</file_target>
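
For illustration, these fields could be decoded like this. The struct below models only the element shown above and omits the surrounding OCS envelope; it is a sketch, not a client library.

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    // share models only the fields shown above; the real OCS response wraps
    // them in an ocs/data envelope that is omitted here for brevity.
    type share struct {
        StorageID  string `xml:"storage_id"`  // string type storage id, e.g. "home::auser"
        Storage    int64  `xml:"storage"`     // numeric storage id
        ItemSource int64  `xml:"item_source"`
        FileSource int64  `xml:"file_source"`
        FileParent int64  `xml:"file_parent"`
        FileTarget string `xml:"file_target"` // path relative to the recipient's home
    }

    func main() {
        data := `<element>
        <storage_id>home::auser</storage_id>
        <storage>993</storage>
        <item_source>3994486</item_source>
        <file_source>3994486</file_source>
        <file_parent>3994485</file_parent>
        <file_target>/Shared/Paris.jpg</file_target>
    </element>`

        var s share
        if err := xml.Unmarshal([]byte(data), &s); err != nil {
            panic(err)
        }
        // A combined reference: storage space id plus path relative to its root.
        fmt.Printf("%s!%d %s\n", s.StorageID, s.FileSource, s.FileTarget)
    }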

Creating a share takes only the path as an argument, so creating and navigating shares needs nothing but the path. Updating or deleting a share takes the share id, not the file id.

The OCS service makes a stat request to the storage provider to get a ResourceInfo object, which contains both a ResourceId and an absolute path. If the resource exists, a request is sent to the gateway. Depending on the type of share, the Collaboration API, the Link API or the Open Cloud Mesh API endpoints are used:

| API | Request | Resource identified by | Grant type | Further arguments |
| --- | --- | --- | --- | --- |
| Collaboration | CreateShareRequest | ResourceInfo | ShareGrant | - |
| Link | CreatePublicShareRequest | ResourceInfo | Link Grant | We send the public link name in the ArbitraryMetadata of the ResourceInfo |
| Open Cloud Mesh | CreateOCMShareRequest | ResourceId | OCM ShareGrant | OCM ProviderInfo |

The user and public share provider implementations identify the file using the ResourceId. The ResourceInfo is passed so the share provider can also store who the owner of the resource is. The path is not part of the other API calls, e.g. when listing shares. The OCM API takes an id based reference on the CS3 API, even if the OCM HTTP endpoint takes a path argument. @jfd: Why? Does it not need the owner? It only stores the owner of the share, which is always the currently logged in user, when creating a share. Afterwards only the owner can update a share … so collaborative management of shares is not possible, at least for OCM shares.

REVA Storage Registry

The reva storage registry manages the CS3 global namespace: it is used by the reva gateway to look up the address and port of the storage provider that should handle a given reference.

[Diagram: gateway, storage registry and storage providers. The storage registry currently maps paths and storage ids to the address:port of the corresponding storage provider. The gateway uses the storage registry to look up the storage provider that is responsible for path and id based references in incoming requests.]
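
The following in-memory sketch illustrates that lookup. It is a toy, not the actual reva implementation: path based references are matched against mount path prefixes, id based references against storage ids, and the mount paths and provider addresses are made up for illustration.

    package main

    import (
        "fmt"
        "strings"
    )

    // registry is a toy storage registry: it maps mount paths and storage
    // ids to the address:port of the responsible storage provider.
    type registry struct {
        byPath map[string]string // mount path prefix -> address:port
        byID   map[string]string // storage id -> address:port
    }

    // lookupPath resolves a path based reference by finding the longest
    // matching mount prefix.
    func (r *registry) lookupPath(p string) (string, bool) {
        best, addr := "", ""
        for prefix, a := range r.byPath {
            if strings.HasPrefix(p, prefix) && len(prefix) > len(best) {
                best, addr = prefix, a
            }
        }
        return addr, best != ""
    }

    // lookupID resolves an id based reference via the storage id.
    func (r *registry) lookupID(storageID string) (string, bool) {
        addr, ok := r.byID[storageID]
        return addr, ok
    }

    func main() {
        r := &registry{
            byPath: map[string]string{
                "/home":  "localhost:9154", // example addresses
                "/users": "localhost:9157",
            },
            byID: map[string]string{
                "993": "localhost:9154",
            },
        }
        if addr, ok := r.lookupPath("/home/Photos"); ok {
            fmt.Println("path based ->", addr)
        }
        if addr, ok := r.lookupID("993"); ok {
            fmt.Println("id based ->", addr)
        }
    }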