The decreasing cost and power consumption of intelligent, interconnected, and interactive devices at the edge of the Internet are creating massive opportunities to instrument our cities, factories, farms, and environment to improve efficiency, safety, and productivity. Developing, debugging, deploying, and securing software for the estimated trillion connected devices presents substantial challenges. As part of the SMARTER (Secure Municipal, Agricultural, Rural, and Telco Edge Research) project, Arm has been exploring the use of cloud-native technology and methodologies in edge environments to evaluate their effectiveness at addressing these problems at scale.
Shared infrastructure is one scenario that edge computing needs to address, driven by cost sharing, speed of deployment, and regulatory constraints. As with cloud computing, an edge computing infrastructure becomes more useful when it can support multiple users and applications on the same hardware. Our approach to creating a shared edge computing framework borrows from the cloud: a user allocates resources on each physical node they request access to, and those resources appear as a virtual node and are managed as such, much the way virtual machines operate in the cloud. This blog post describes how we implemented our solution for sharing edge computing infrastructure and how it can be used. This work is based on our framework, SMARTER, and builds on the blog series starting at SMARTER: A Smarter-Device-Manager for Kubernetes on the Edge.
The brokerage for shared Internet of Things (IoT) infrastructure allows each node in a SMARTER Edge infrastructure to be partitioned, with each tenant having full control of a partition. The diagram below shows the brokerage architecture. A set of physical nodes is managed by an instance of SMARTER Node Manager. This physical-node manager is connected to a SMARTER Brokerage Manager that provides the brokerage Application Programming Interface (API). The Brokerage Manager allows virtual nodes (partitions of physical nodes) to be allocated on physical nodes, with each virtual node behaving as a SMARTER node itself, managed by a separate instance of SMARTER Node Manager (one per tenant).
The Brokerage Manager server supports two types of users: brokers and tenants. A broker is a superuser of the Brokerage Manager and is able to create, modify, and delete tenants and virtual nodes. A tenant has limited access to the Brokerage Manager API and can only manage their own virtual nodes (creation and deletion). A tenant SMARTER Node Manager is used to deploy applications on those virtual nodes; a tenant is expected to have full access to their own SMARTER Node Manager.
The Brokerage Manager API is a REST API and is described by its swagger definition. The section “API Notes” below describes the API in more detail. The SMARTER Brokerage Manager is stateless: all API operations interact with objects in the SMARTER Node Manager that manages the physical nodes. Each virtual node appears as a pod running on the physical node it is associated with.
Figure 1: SMARTER virtual nodes
Figure 2: SMARTER timeline
A node in a k3s/k8s context is a system that is managed by a k3s/k8s server and where pods can be deployed. In figure 1, there are three SMARTER Node Managers (k3s/k8s servers). The blue one manages the physical infrastructure (Edge nodes); the red and green ones manage partitions running on the Edge nodes. The red and green managers do not have direct access to the Edge nodes themselves, only to their designated partitions. Each partition behaves as a node to k3s/k8s.
The main concepts within the brokerage API are:
Physical nodes are Edge nodes that the broker has enabled for use by tenants. Enabling an Edge node allows virtual nodes to be created on it.
Virtual nodes are partitions on a physical node. They behave as a node for k3s/k8s and are expected to be managed by a separate SMARTER Node Manager. Multiple virtual nodes can coexist on the same physical node.
Broker is a superuser of the brokerage manager who is able to create/delete tenants and enable/disable physical nodes for allocation. Edge nodes are not required to be managed by the broker.
Tenants represent the users that allocate and use virtual nodes.
The repository k3s on GitLab SMARTER contains a Docker image that allows k3s to run as a container. Multiple SMARTER Node Managers can run on a single server if different server ports are used (HOSTPORT). The server configuration is persisted, so restarting a container will reuse the same data, including k3s objects and credentials. The data is preserved in the directory pointed to by LOCALSERVERDIR. The file k3s-start.sh provides a script that starts the container with the correct options. Create a copy of the file and change the variables HOSTPORT, HOSTIP, and SERVERNAME so the script starts the SMARTER Node Manager with the correct configuration for your environment. HOSTIP should be the IP address that the nodes will connect to; if NAT is used, use the IP of the NAT and not the IP of the server.
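As an illustration, a copy of k3s-start.sh might be edited along these lines (the values shown are placeholder assumptions, not working defaults; adjust them for your environment):

HOSTPORT=7443                                  # port this SMARTER Node Manager listens on; must differ per instance
HOSTIP=192.168.0.1                             # IP the edge nodes will connect to (the NAT IP if behind NAT)
SERVERNAME=tenant1-node-manager                # name identifying this server instance
LOCALSERVERDIR=/var/lib/smarter/${SERVERNAME}  # directory where k3s objects and credentials are persisted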
The SMARTER Brokerage Manager needs credentials to access the SMARTER Node Manager that manages the Edge nodes. Those credentials are provided in a kube.config file (KUBECONFIGEDGE). A Docker image can be used to run SMARTER-brokerage. The following command runs the broker; replace <local kubeconfig file> with the full path to your kube.config and <latest tag> with the correct image tag. Head to the GitLab SMARTER Brokerage repository to read more on this. Note that Docker requires the full path of the file: if the file does not exist, or the full path is not provided, Docker will create a directory and pass that directory to the container, preventing the broker from starting correctly.
docker run -d --rm -p 8080:8080 -v <local kubeconfig file>:/config/kube.config registry.gitlab.com/arm-research/smarter/smarter-brokerage:<latest tag>
If the broker is running correctly, the Docker logs should be similar to this message:
 * Serving Flask app "__main__" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
SMARTER brokerage is written in Python and requires Flask and the Kubernetes Python API. Both can be installed using pip3:
pip3 install 'connexion[swagger-ui]'
pip3 install flask
pip3 install kubernetes
Next, clone the SMARTER Brokerage repository from GitLab.
Place the credentials that allow access to the SMARTER Node Manager in a file (for example, copy kube.config) and set the environment variable KUBECONFIG to point to that file. You can test the configuration using kubectl. If KUBECONFIG is not set, the default location is “~/.kube/config”.
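For example, a quick sanity check could look like this (assuming the credentials were copied to kube.config in the current directory):

export KUBECONFIG=$(pwd)/kube.config
kubectl get nodes   # should list the Edge nodes managed by this SMARTER Node Manager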
As with the Docker option, the SMARTER Brokerage Manager needs the credentials to access the SMARTER Node Manager that manages the Edge nodes, provided in a kube.config file (KUBECONFIGEDGE). The script “execute_test.sh” runs the broker, serving the API on localhost port 8080.
export KUBECONFIG=~/.kube/config
./execute_test.sh
If the broker is running correctly, you should see the same Flask startup messages shown above.
The configuration expected for this example is described below.
The directory “test” in the repository contains scripts and yaml templates that can be used to test the API and provide examples of its usage. The script test-tenant-creat.sh provides examples of creating, deleting, and reading the objects of the API:
The script uses wget and can run on any machine that can reach the SMARTER Brokerage TCP port (8080 by default). The script lists the existing tenants, deletes the tenant with ID TENANTID_USED, and recreates it. The same operations (list, delete, and create) are executed for the physicalNode named by the PNODE_USED variable. It then executes the same operations for the node named by the variable VNODE_USED; this virtual node is created on the physicalNode created in the previous operation. The virtual node should register with the SMARTER Node Manager referenced by the variables TENANT_NODE_MANAGER_CONNECTION_INFO and TENANT_NODE_MANAGER_TOKEN, under the name VNODE_USED.
The test script test-tenant-creat.sh requires the following variables to be set:
WGET_HOST
TENANTID_USED
VNODE_USED
PNODE_USED
TENANT_NODE_MANAGER_CONNECTION_INFO
TENANT_NODE_MANAGER_TOKEN
TENANT_NODE_MANAGER_KUBECONFIG_FILE
A few considerations to note here. TENANTID_USED, VNODE_USED, and PNODE_USED are limited in the characters they may contain: lowercase and uppercase alphanumeric characters, - (dash), _ (underscore), and . (dot). Other characters are not accepted, and the values must start and end with an alphanumeric character. TENANT_NODE_MANAGER_CONNECTION_INFO is the same information found in the cluster.server field of the kube.config file.
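One quick way to find that value (a simple grep; the exact layout of your kube.config may differ):

grep 'server:' kube.config   # prints the cluster.server URL, e.g. https://192.168.0.1:7443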
Next, create a file test-tenant-creat-vars.sh in the test directory with the required variables (the values below are only an example and will probably not work with your configuration):
WGET_HOST=http://127.0.0.1:8080
TENANTID_USED=tenant1
VNODE_USED=vnnode1
PNODE_USED=node31
TENANT_NODE_MANAGER_CONNECTION_INFO="https://192.168.0.1:7443"
TENANT_NODE_MANAGER_TOKEN="Kaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa::server:bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
TENANT_NODE_MANAGER_KUBECONFIG_FILE=$(pwd)/temp/kube.tenant1.config
Execute the test-tenant-creat.sh script:
./test/test-tenant-creat.sh
The script is interactive: it stops before executing each query so the user can check the query and its response before moving on to the next one. The script deletes an object before creating it, so the delete operations will fail the first time the script runs, since the objects do not exist yet.
This is an example of the expected result:
Checking if kubectl can access the tenant node manager
--------------------------------------------------------
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-27T00:38:11Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8+k3s1", GitCommit:"6b595318666804506c19cfc3d3d228423d38fab1", GitTreeState:"clean", BuildDate:"2020-08-14T20:42:29Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
--------------------------------------------------------
Adding the SMARTER CNI and DNS to the tenant Node Manager
--------------------------------------------------------
configmap/smartercniconfigvnode unchanged
daemonset.apps/smarter-cni-vnode unchanged
configmap/smarterdnscorefilevnode unchanged
configmap/smarterdnsconfigvnode unchanged
daemonset.apps/smarter-dns-docker-vnode unchanged
--------------------------------------------------------
--------------------------------------------------------
wgetsh GET broker /api/v1alpha1/tenants
--------------------------------------------------------
Press a key to execute the query, or ctrl-c to stop
The script uses an auxiliary function (wgetsh) to run the wget command. The first parameter is the HTTP method; GET, DELETE, and POST are supported. The second parameter is the user role, which can be broker or tenant. The third parameter is the user ID, which is empty for the broker role and is the tenant ID for the tenant role. The fourth parameter determines the API endpoint to be used, and the fifth parameter provides the data to be used if the HTTP method requires it (POST requires the json data for the object to be created); a sketch of such a helper appears after the example output below. In this particular example, the script executes an HTTP GET (REST API read) with the broker role on the endpoint /api/v1alpha1/tenants, which lists all the existing tenants on the system. The result of the command is shown below after pressing a key:
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 67
Server: Werkzeug/1.0.1 Python/3.8.3
Date: Mon, 17 Aug 2020 20:25:44 GMT

{
  "apiVersion": "v1alpha1",
  "items": [],
  "kind": "Tenants"
}
=========================================================
--------------------------------------------------------
wgetsh DELETE broker /api/v1alpha1/tenants/tenant1
--------------------------------------------------------
Press a key to execute the query, or ctrl-c to stop
The results can be read as follows. In “HTTP/1.0 200 OK”, the HTTP code 200 indicates that the command executed successfully. The following lines indicate that the response is json, 67 bytes long, followed by information about the server and the date the command was executed. The json result is an armBrokerage.api.core.v1alpha1.TenantList object, and its empty items list indicates that no tenants are present on the system. Pressing a key should give the result below:
HTTP/1.0 404 NOT FOUND
Content-Type: application/json
Content-Length: 240
Server: Werkzeug/1.0.1 Python/3.8.3
Date: Mon, 17 Aug 2020 20:53:16 GMT

{
  "apiVersion": "v1alpha1",
  "code": 404,
  "details": {
    "kind": "tenant",
    "name": "tenant1"
  },
  "kind": "Status",
  "message": "tenant \"tenant1\" not found",
  "metadata": {},
  "reason": "NotFound",
  "status": "Failure"
}
=========================================================
Waiting 30 seconds for all things to be deleted
The result of the query is 404 (Not Found), indicating that the object does not exist; the returned object is of type io.k8s.apimachinery.pkg.apis.meta.v1.Status. The 30-second delay is required when deleting a tenant, since removing all the virtual nodes the tenant has running can take a while.
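For reference, a wgetsh-like helper could look roughly like the sketch below. This is an illustration only, not the actual script: in particular, the header names used to carry the role and user ID are assumptions (the swagger definition and the script itself are authoritative).

# Illustrative sketch of a wgetsh-like helper; NOT the actual script.
# $1 = HTTP method, $2 = role, $3 = user ID, $4 = endpoint, $5 = json file (POST only)
wgetsh() {
    local method=$1 role=$2 userid=$3 endpoint=$4 datafile=$5
    local args=(--method="$method" --header="Role: $role" --header="AccountID: $userid")
    if [ -n "$datafile" ]; then
        args+=(--header="Content-Type: application/json" --body-file="$datafile")
    fi
    # -S prints the HTTP response headers (to stderr); -O - sends the body to stdout
    wget -S -O - "${args[@]}" "${WGET_HOST}${endpoint}"
}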
The following HTTP queries (REST API requests) will be executed by the script:
REST API read the tenant named by the TENANTID_USED variable:
wgetsh GET broker "" /api/v1alpha1/tenants/${TENANTID_USED}
If the tenant TENANTID_USED exists, the script will delete it using the REST API delete tenant and the result should be 200 (OK):
wgetsh DELETE broker "" /api/v1alpha1/tenants/${TENANTID_USED}
If the tenant TENANTID_USED does not exist, the script will execute the REST API delete tenant and the result should be 404 (Not Found).
The test script will loop until the tenant does not exist anymore by executing REST API READ tenant TENANTID_USED.
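That loop might look roughly like this sketch (illustrative only; the real script's check may differ, and it assumes the wgetsh helper sketched earlier):

# Poll the read endpoint until it returns 404, meaning the tenant is gone
until wgetsh GET broker "" /api/v1alpha1/tenants/${TENANTID_USED} 2>&1 | grep -q '404 NOT FOUND'; do
    sleep 5
done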
REST API create tenant:
wgetsh POST broker "" /api/v1alpha1/tenants template.json.tmp
REST API read tenants:
wgetsh GET broker "" /api/v1alpha1/tenants
wgetsh GET tenant ${TENANTID_USED} /api/v1alpha1/tenants/${TENANTID_USED}
REST API read physicalNodes:
wgetsh GET broker "" /api/v1alpha1/physicalnodes
REST API read physicalNode PNODE_USED:
wgetsh GET broker "" /api/v1alpha1/physicalnodes/${PNODE_USED}
If the physicalNode PNODE_USED exists, the script will delete it using the REST API delete physicalNode, and the result should be 200 (OK). All the virtual nodes running on this physical node will also be deleted.
wgetsh DELETE broker "" /api/v1alpha1/physicalnodes/${PNODE_USED}
If the physicalNode PNODE_USED does not exist, the script will execute REST API delete physicalNode and the result should be 404 (Not Found).
The test script will loop until the physicalNode does not exist anymore by executing REST API READ physicalNode PNODE_USED.
REST API create physicalNode:
wgetsh POST broker "" /api/v1alpha1/physicalnodes template.json.tmp
REST API read nodes (virtual nodes) that belong to the tenant (TENANTID_USED):
wgetsh GET tenant ${TENANTID_USED} /api/v1alpha1/tenants/${TENANTID_USED}/nodes
If the node VNODE_USED exists, the script will delete it using the REST API delete tenant/node, and the result should be 200 (OK).
wgetsh DELETE tenant ${TENANTID_USED} /api/v1alpha1/tenants/${TENANTID_USED}/nodes/${VNODE_USED}
If the node VNODE_USED does not exist, the script will execute REST API delete tenant/node, and the result should be 404 (Not Found).
The test script will loop until the node does not exist anymore by executing REST API READ tenant/node VNODE_USED.
REST API create node VNODE_USED for the tenant (TENANTID_USED):
wgetsh POST tenant ${TENANTID_USED} /api/v1alpha1/tenants/${TENANTID_USED}/nodes template.json.tmp
REST API read the node (VNODE_USED) that belongs to the tenant (TENANTID_USED):
wgetsh GET tenant ${TENANTID_USED} /api/v1alpha1/tenants/${TENANTID_USED}/nodes/${VNODE_USED}
Add the labels to the node (virtual node VNODE_USED) to install the CNI and DNS. After a while, the node should appear ready in the SMARTER Node Manager for the tenant (TENANTID_USED):
kubectl label node ${VNODE_USED} smarter.cni.vnode=deploy
kubectl label node ${VNODE_USED} smarter.cri.vnode=docker
Wait for the node to register with the SMARTER Node Manager for the tenant and become ready. The test script executes kubectl get nodes -w (-w watches for changes, so type ctrl-c to exit the script):
kubectl get nodes -w
The node is now ready to be used. Pods can be deployed on it through the SMARTER Node Manager for the tenant (TENANTID_USED).
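As a quick illustrative sketch (the pod name, image, and sleep command are arbitrary examples; KUBECONFIG is assumed to point at the tenant kube.config created earlier):

export KUBECONFIG=${TENANT_NODE_MANAGER_KUBECONFIG_FILE}
# Pin a trivial pod to the new virtual node and check that it runs there
kubectl run test-pod --image=busybox --restart=Never --overrides='{"spec":{"nodeName":"'${VNODE_USED}'"}}' -- sleep 3600
kubectl get pods -o wide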
An example of how to use JWT tokens to authenticate with the Brokerage API is provided in the directory test/nginx-authenticator. In this example, an NGINX proxy queries an authenticator that validates and decodes the JWT token, passing the role and accountID found in the token on to the Brokerage API. If the JWT token is not valid, NGINX returns an authorization error to the user. An application that creates JWT tokens valid for the system is also provided.
A JWT token is a cryptographically signed json object; in the example provided, the token contains the role and accountID to be used by the REST API. A JWT token can be generated by the tokencreate.py application in test/nginx-authenticator.
There are two options to run the nginx-authenticator: using Docker, or a native install.
Create the jwt-authenticator image:
cd test/nginx-authenticator
docker build -t jwt-authenticator .
A secret key is needed to create and check the tokens; any sequence of characters can be used. The following example shows a token created by the image for the user tenant1 with the role tenant.
docker run --rm -it -e BROKERAGE_SECRET_KEY="sdfhdafjkdhfsdlfdjfh" jwt-authenticator python3 tokencreate.py tenant1
The command result is posted below:
Creating a jwt token for role: tenant accountID: tenant1
token decoded: {'role': 'tenant', 'accountID': 'tenant1'}
token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyb2xlIjoidGVuYW50IiwiYWNjb3VudElEIjoidGVuYW50MSJ9.YNhqem0TJXRaaoufsbGXbwgiT9AeTfsch0_C3REgC9s
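A JWT token's middle, dot-separated section is just the base64-encoded payload, so it can be inspected without the secret key; for example:

# Decode the payload (second section) of the token above
echo 'eyJyb2xlIjoidGVuYW50IiwiYWNjb3VudElEIjoidGVuYW50MSJ9' | base64 -d
# prints: {"role":"tenant","accountID":"tenant1"}

Note that the payload is only encoded, not encrypted: the secret key protects the token's integrity through the signature, not its confidentiality.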
The token validator uses the same image and the following command starts the process:
docker run --rm -d -p 8000:8000 -e BROKERAGE_SECRET_KEY="sdfhdafjkdhfsdlfdjfh" jwt-authenticator
The command runs in the background. To verify that the authenticator is running, check the container logs:
docker logs <container ID>
An example of the correct logs is presented below:
[2020-11-04 18:47:16 +0000] [1] [DEBUG] Current configuration:
  config: None
  bind: ['0.0.0.0:8000']
  backlog: 2048
  workers: 3
  worker_class: sync
  threads: 1
  worker_connections: 1000
  max_requests: 0
  max_requests_jitter: 0
  timeout: 30
  graceful_timeout: 30
  keepalive: 2
  limit_request_line: 4094
  limit_request_fields: 100
  limit_request_field_size: 8190
  reload: False
  reload_engine: auto
  reload_extra_files: []
  spew: False
  check_config: False
  preload_app: False
  sendfile: None
  reuse_port: False
  chdir: /code
  daemon: False
  raw_env: []
  pidfile: None
  worker_tmp_dir: None
  user: 0
  group: 0
  umask: 0
  initgroups: False
  tmp_upload_dir: None
  secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
  forwarded_allow_ips: ['127.0.0.1']
  accesslog: None
  disable_redirect_access_to_syslog: False
  access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
  errorlog: -
  loglevel: debug
  capture_output: False
  logger_class: gunicorn.glogging.Logger
  logconfig: None
  logconfig_dict: {}
  syslog_addr: udp://localhost:514
  syslog: False
  syslog_prefix: None
  syslog_facility: user
  enable_stdio_inheritance: False
  statsd_host: None
  dogstatsd_tags:
  statsd_prefix:
  proc_name: None
  default_proc_name: tokenvalidate:api
  pythonpath: None
  paste: None
  on_starting: <function OnStarting.on_starting at 0x7fb328ba61f0>
  on_reload: <function OnReload.on_reload at 0x7fb328ba6310>
  when_ready: <function WhenReady.when_ready at 0x7fb328ba6430>
  pre_fork: <function Prefork.pre_fork at 0x7fb328ba6550>
  post_fork: <function Postfork.post_fork at 0x7fb328ba6670>
  post_worker_init: <function PostWorkerInit.post_worker_init at 0x7fb328ba6790>
  worker_int: <function WorkerInt.worker_int at 0x7fb328ba68b0>
  worker_abort: <function WorkerAbort.worker_abort at 0x7fb328ba69d0>
  pre_exec: <function PreExec.pre_exec at 0x7fb328ba6af0>
  pre_request: <function PreRequest.pre_request at 0x7fb328ba6c10>
  post_request: <function PostRequest.post_request at 0x7fb328ba6ca0>
  child_exit: <function ChildExit.child_exit at 0x7fb328ba6dc0>
  worker_exit: <function WorkerExit.worker_exit at 0x7fb328ba6ee0>
  nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7fb328bae040>
  on_exit: <function OnExit.on_exit at 0x7fb328bae160>
  proxy_protocol: False
  proxy_allow_ips: ['127.0.0.1']
  keyfile: None
  certfile: None
  ssl_version: 2
  cert_reqs: 0
  ca_certs: None
  suppress_ragged_eofs: True
  do_handshake_on_connect: False
  ciphers: None
  raw_paste_global_conf: []
  strip_header_spaces: False
[2020-11-04 18:47:16 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2020-11-04 18:47:16 +0000] [1] [DEBUG] Arbiter booted
[2020-11-04 18:47:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2020-11-04 18:47:16 +0000] [1] [INFO] Using worker: sync
[2020-11-04 18:47:16 +0000] [8] [INFO] Booting worker with pid: 8
[2020-11-04 18:47:16 +0000] [9] [INFO] Booting worker with pid: 9
[2020-11-04 18:47:16 +0000] [10] [INFO] Booting worker with pid: 10
[2020-11-04 18:47:16 +0000] [1] [DEBUG] 3 workers
The nginx proxy can be started by the following command:
docker run --rm -d -p 8081:8081 -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf nginx
This example does not use SSL, but nginx supports SSL and it can be enabled.
Queries to port 8081 follow this path: NGINX receives the request and passes the JWT token to the authenticator; if the token is valid, the request is forwarded to the Brokerage API along with the role and accountID extracted from the token; if not, NGINX returns an authorization error.
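For example, a call through the proxy might look like the following (the Authorization: Bearer header name is an assumption here; check nginx.conf for the header the proxy actually expects):

TOKEN=<token created by tokencreate.py>
wget -q -O - --header="Authorization: Bearer ${TOKEN}" http://127.0.0.1:8081/api/v1alpha1/tenants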
The test-tenant-creat-token.sh script has the same function as the test-tenant-creat.sh script, but uses tokens. Two tokens are required, one created with the broker role:
docker run --rm -it -e BROKERAGE_SECRET_KEY="sdfhdafjkdhfsdlfdjfh" jwt-authenticator python3 tokencreate.py
And one with the tenant role and tenant1 accountID:
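docker run --rm -it -e BROKERAGE_SECRET_KEY="sdfhdafjkdhfsdlfdjfh" jwt-authenticator python3 tokencreate.py tenant1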
Figure 3 shows a physical node (nodedemo1) hosting two virtual nodes (both called vnnodedemo1).
Figure 3: SMARTER virtual nodes
The SMARTER Node Manager that is managing the physical nodes sees each virtual node as a pod named after the virtual node, under a namespace named after the tenant (tenantID). In the example shown, the pods are named vnnodedemo1 in the namespace tenant1 and vnnodedemo1 in the namespace tenant2. The hostname of a virtual node is “node_id”-“tenantID”, so in this example the hostnames are vnnodedemo1-tenant1 and vnnodedemo1-tenant2. Pods and containers are only visible to the SMARTER Node Manager that manages that node (physical or virtual).
Containers running in a virtual node are only visible to containers on the same virtual node. If a container needs to provide a service visible to another node, it has to do so through host networking or hostPort options. The virtual nodes are accessible by name to any other container (containers on virtual nodes and containers on the physical node).
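As an illustrative sketch of the host-networking route (the pod name and image are arbitrary examples, not part of SMARTER):

# Run a pod on the host network so its service is reachable from outside
# the virtual node; nginx here is just an example workload
kubectl run web --image=nginx --restart=Never --overrides='{"spec":{"hostNetwork":true}}'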
The order of precedence in name resolution is virtual node DNS, then physical node DNS, then outside DNS. So, if the same name exists on the virtual node and on the physical node, only the one on the virtual node will be visible and accessible. Remember that containers in one virtual node are not visible to containers in other virtual or physical nodes.
The Brokerage API is expected to be stateless: whenever possible, all persistent storage is done through k8s objects.
Objects provide Create/Read/Update/Delete (CRUD) interfaces, and non-named objects provide list interfaces (a full CRUD interface is not provided). This version only supports CRD (create, read (list), and delete; no updates).
Tenants: CRD for the admin (operator), R for the tenant itself (it can read its own information). An open question is whether a quota value (maximum amount of resources reserved) should be attached to a tenant.
Nodes (virtual nodes): CRD for the admin (operator), CRD for the tenant.
Physical nodes: CRD for the admin (operator), R for the tenant (a tenant can read all the physical nodes present on the system).
A future version is expected to provide full CRUD (adding updates).
The API follows the k8s API model.
The API expects two headers in each request, carrying the user role and the account ID. The broker role has full access to the API; the tenant role only has access to objects under its own ID, “/api/v1alpha1/tenants/<tenant ID>”.
No authentication is provided by the broker itself; it is assumed that a proxy will provide both SSL/TLS and authentication, as in the nginx-authenticator example above.
SMARTER Brokerage Manager aims to support partitioning an Edge computing infrastructure, allowing users direct control of where, when, and which applications run. The current version achieves this objective, but we identified improvements and limitations, described below.
It is possible to enable access to devices using the SMARTER Device Manager, both from the Edge node to the virtual nodes and from the virtual nodes to the pods running inside them. This is not yet implemented in the Brokerage Manager, as some form of allow/deny list may be required. Another limitation of this solution is that the device set would be static inside the virtual node, so adding devices to a virtual node would require it to be recreated.
The current isolation model is weak, but it is sufficient to show that node partitioning is possible. Stronger container isolation for the virtual node would improve security; options being evaluated include Kata Containers and full virtualization using seL4.
The current implementation does not allow the operator of the Edge nodes to reserve resources for use at the Edge node level (unavailable to virtual nodes) or to set the maximum amount of resources a virtual node can request. Some of this can be addressed by a more complex admission-control model.
In this blog post, we describe our proposal for providing a shared Edge computing infrastructure. Concepts are borrowed from cloud infrastructure: primarily, virtualization is used to partition nodes, with users given a notion of virtual nodes. The evolution of cloud infrastructure shows that while this model is powerful and useful, it is also limiting, and more flexible models supporting requirements like dynamic partitioning and shared services will be the next frontier.
Questions? Contact Alexandre Ferreira. Explore the SMARTER series.