How to Create an Oracle Autonomous Data Warehouse Cloud Service with OAC Integration
This blog outlines how to configure an Oracle Autonomous Data Warehouse (ADW) service, create and load data tables, and connect the database to the Data Visualization service that comes as a part of Oracle Analytics Cloud (OAC).
About Oracle Autonomous Database and Related Solutions
Oracle’s launch of the world’s first Autonomous Database and related cloud services pioneers a next-generation data management platform. Oracle Autonomous Database Cloud uses machine learning to automate away human labor, human error, and manual tuning, enabling high availability, high performance, and security at a much lower cost than traditional data warehouses.
Key points to note about ADW are:
Provides a fully autonomous database that scales elastically
Self-tunes and optimizes for data warehouse workloads
Requires no database administration (always runs the latest security patches)
Based on Oracle Database 18c, Release 1, Enterprise Edition
Uses artificial intelligence (AI) to deliver high-performance, highly elastic data management
Upgrades itself while the system is running
Comes optimized and pre-configured so the data warehouse cloud service can be created quickly
Makes it easy to migrate on-premises data warehouses or data marts to ADW
Supports all standard SQL and business intelligence (BI) tools and delivers great query performance
How to Provision ADW
A cloud compartment must be created before provisioning the ADW instance.
1. Provide a display name and a database name.
2. Choose the workload type (in this example, “Data Warehouse”).
3. Choose the deployment type (in this example, “Serverless”); if needed, choose “Dedicated Infrastructure,” which provides dedicated Exadata hardware with a different pricing option.
Figure 1: Visual representation of how to provision your ADW.
4. Configure the database (provide the CPU count and storage capacity) and choose “Auto Scaling” to support heavier workloads. This option lets the system expand to up to three times the provisioned number of CPUs, depending on the workload on the database.
Important note on the “Always Free” option: it lets you create a limited number of “Always Free” Autonomous Databases that do not consume cloud credits. It supports both the “Data Warehouse” and “Transaction Processing” workload types. There are some restrictions, but remember, it’s FREE!
5. Provide the “ADMIN” user password, select the appropriate license type, and in less than ten minutes the ADW service will be created.
Figure 2: Another visual representation of how to provision your ADW.
6. Once the service is created, download the client credentials zip file from the “DB Connection” tab, providing a wallet password when downloading the file. This file is used to access ADW from any client such as SQL Developer (version 18.2 or later; 19.1 or later is preferred), DV, OBIEE, etc.
Figure 3: Visual representation of the “DB Connection” tab location.
Other options like “Service Console”, “Scaling”, “Stopping ADW”, etc. are also available on the “Service” page.
7. Connect to ADW using SQL Developer and create a user granted “dwrole” (in some cases, you will also need to grant the user tablespace access to perform data loads).
Change “Connection Type” to “Cloud Wallet,” browse to the downloaded “DB Connection” zip file, and select the required service name (high, low, or medium).
Test and save the connection.
Figure 4: Visual representation of how to connect to ADW using SQL Developer.
8. After connecting as ADMIN, create another user, “ADWC_USER,” and grant it “dwrole” and tablespace access using the commands below:
CREATE USER ADWC_USER identified by <password>;
GRANT dwrole to ADWC_USER;
ALTER USER ADWC_USER QUOTA 100M ON DATA;
GRANT UNLIMITED TABLESPACE TO ADWC_USER;
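As an optional sanity check, you can then connect as “ADWC_USER” and confirm the grants using the standard Oracle data dictionary views:

```sql
-- Run while connected as ADWC_USER
-- Confirm the role grant
SELECT granted_role FROM user_role_privs;

-- Confirm the tablespace quota
SELECT tablespace_name, max_bytes FROM user_ts_quotas;
```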
9. Connect as “ADWC_USER” in the same way you connected as “ADMIN.” Create tables in the schema using traditional SQL CREATE statements. In this case, we are creating the two tables “SALES” and “PRODUCTS” with a primary key and a foreign key:
CREATE TABLE SALES (
prod_id NUMBER NOT NULL,
quantity_sold NUMBER(10,2) NOT NULL,
amount_sold NUMBER(10,2) NOT NULL);
CREATE TABLE PRODUCTS (
prod_id NUMBER(6) NOT NULL,
prod_name VARCHAR2(50) NOT NULL,
prod_desc VARCHAR2(4000) NOT NULL,
prod_subcategory VARCHAR2(50) NOT NULL,
prod_subcategory_desc VARCHAR2(2000) NOT NULL,
prod_category VARCHAR2(50) NOT NULL,
prod_category_desc VARCHAR2(2000) NOT NULL,
prod_weight_class NUMBER(3) NOT NULL,
prod_unit_of_measure VARCHAR2(20) ,
prod_pack_size VARCHAR2(30) NOT NULL,
prod_status VARCHAR2(20) NOT NULL,
prod_list_price NUMBER(8,2) NOT NULL,
prod_min_price NUMBER(8,2) NOT NULL,
prod_total VARCHAR2(13) NOT NULL);
ALTER TABLE PRODUCTS
ADD CONSTRAINT products_pk
PRIMARY KEY (prod_id)
RELY DISABLE NOVALIDATE;
ALTER TABLE SALES
ADD CONSTRAINT sales_product_fk
FOREIGN KEY (prod_id) REFERENCES products (prod_id)
RELY DISABLE NOVALIDATE;
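Because the constraints above are created RELY DISABLE NOVALIDATE (the optimizer can rely on them for query rewrites without the database enforcing or validating them), it can be useful to confirm how they were recorded. A quick check against the standard data dictionary:

```sql
-- Inspect the constraints just created on the two tables
SELECT table_name, constraint_name, constraint_type, rely, status, validated
FROM user_constraints
WHERE table_name IN ('SALES', 'PRODUCTS');
```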
10. Load data into these tables in ADW. The data is staged in “OCI Object Storage” (within buckets in a compartment).
Important note: other data load options are available, such as Oracle Database tools; Oracle and third-party data integration tools; and loading directly from files on a client computer or from files staged in a cloud object store (recommended), including OCI Object Storage, OCI Object Storage Classic, Amazon AWS S3, and Azure. There are also prebuilt packages available in ADW; one of them, the PL/SQL “DBMS_CLOUD” package, is used here to load data into ADW.
In this case, this is done in two steps:
Store the object store credentials in the ADW schema (using an auth token generated from the cloud object store account).
Load the source file staged in the object store into the database table. The table must already exist in ADW.
All data load operations done using the PL/SQL package “DBMS_CLOUD” are logged in the tables “dba_load_operations” and “user_load_operations.”
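For example, after a load you can check its outcome (including the number of rows loaded and the names of the generated log and bad-file tables) with a query like this:

```sql
-- Review recent DBMS_CLOUD load operations for the current user
SELECT table_name, status, rows_loaded, logfile_table, badfile_table
FROM user_load_operations
WHERE type = 'COPY'
ORDER BY start_time DESC;
```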
Creating a credential can be done with the “DBMS_CLOUD” package:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'email@example.com',
    password        => 'TOKEN'
  );
END;
/
Here, “TOKEN” is the auth token used as the password for “Object Storage,” which can be generated from the OCI account under “Auth Tokens.”
Figure 5: Visual representation of where to go to authorize the token.
11. To stage the data, upload it to “Object Storage”: create a new bucket and upload the data files to that bucket.
Figure 6: Visual representation of where data is uploaded.
12. Use the PL/SQL package to load data into ADW. The “COPY_DATA” procedure needs the table name, the credential name created in the step above, the file URI where the data file is staged, and any other formatting-related options. For example, for the “SALES” table (substitute the object store file URI for your own bucket):
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => '<object store file URI>',
    format          => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true', 'dateformat' value 'YYYY-MM-DD')
  );
END;
/
The “PRODUCTS” table can be loaded in the same way.
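As a quick sanity check after the loads complete, you can compare the row counts in the tables against the source files (a simple illustrative check):

```sql
-- Verify that rows arrived in both tables
SELECT COUNT(*) AS sales_rows FROM SALES;
SELECT COUNT(*) AS products_rows FROM PRODUCTS;
```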
13. Once the tables are loaded, DV can be used on top of ADW to build visualizations.
The first step is to set up a DV connection to ADW. Select and upload the wallet file used for accessing the ADW database.
Figure 7: Visual representation of where to go to set up the ADW connection.
After setting up the connection, import the tables as new datasets in a DV project. Join the two datasets using “PROD_ID.”
Figure 8: Visual representation of joining two datasets.
Now you are ready to build visuals on top of the datasets! If the data in the ADW tables changes at any point, you can reload the data in the DV projects so the visuals stay current.
Figure 9: Visual representation of creating DV projects with datasets.
Need help determining how to provision your ADW to create powerful reporting and analysis using Oracle Analytics and BI solutions? Leave us a comment below or email us at firstname.lastname@example.org, and let us know your specific needs and questions.