Feed aggregator

Database Link to 9.2 Database from 19c

Bobby Durrett's DBA Blog - Fri, 2019-12-13 15:12

I have mentioned in previous posts that I am working on migrating a large 11.2 database on HP Unix to 19c on Linux. I ran across a database link to an older 9.2 database in the current 11.2 database. That link does not work in 19c, so I thought I would blog about my attempts to get it working there. It may not be that useful to other people because it is a special case, but I want to remember it for myself if nothing else.

First, I’ll just create a test table in my own schema on a 9.2 development database:

SQL> create table test as select * from v$version;

Table created.

SQL> 
SQL> select * from test;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
PL/SQL Release 9.2.0.5.0 - Production
CORE	9.2.0.6.0	Production
TNS for HPUX: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

Next, I will create a link to this 9.2 database from a 19c database. I will hide the part of the link creation that has my password and the database details, but they are not needed.

SQL> create database link link_to_92
... removed for security reasons ...

Database link created.

SQL> 
SQL> select * from test@link_to_92;
select * from test@link_to_92
                   *
ERROR at line 1:
ORA-03134: Connections to this server version are no longer supported.

So I looked up ways to get around the ORA-03134 error. I can’t remember all the things I checked, but I have a note that I looked at this one link: Resolving 3134 errors. The idea was to create a new database link from an 11.2 database to the 9.2 database, then create a synonym on the 11.2 database for the table I want on the 9.2 system. Here is what that looks like on my test databases:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
... removed for brevity ...

SQL> create database link link_from_112
... removed for security ...

Database link created.

SQL> create synonym test for test@link_from_112;

Synonym created.

SQL> 
SQL> select * from test;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

Now that I have the link and synonym on the 11.2 middleman database, I go back to the 19c database and create a link to the 11.2 database and query the synonym to see the original table:

SQL> select * from v$version;

BANNER                                                                           ...
-------------------------------------------------------------------------------- ...
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production           ...
...

SQL> create database link link_to_112
...

Database link created.
...
SQL> select * from v$version@link_to_112;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
...

SQL> select * from test@link_to_112;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

So far so good. I am not sure how clear I have been, but the point is that I could not query the table test on the 9.2 database from a 19c database without getting an error. By hopping through an 11.2 database I can now query it. But, alas, that is not the end of my problems with this remote 9.2 database table.

When I first started looking at these remote 9.2 tables on my real system, I wanted to get an execution plan of a query that used them. The link-through-an-11.2-database trick let me query the tables, but it did not let me get a plan for the query.

SQL> truncate table plan_table;

Table truncated.

SQL> 
SQL> explain plan into plan_table for
  2  select * from test@link_to_112
  3  /

Explained.

SQL> 
SQL> set markup html preformat on
SQL> 
SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Error: cannot fetch last explain plan from PLAN_TABLE

SQL> 
SQL> select object_name from plan_table;

OBJECT_NAME
------------------------------------------------------------------------------

TEST

Kind of funky, but not the end of the world. Only a small number of queries use these remote 9.2 tables, so I should be able to live without explain plan. Next, I needed to use the remote table in a PL/SQL package. For simplicity I will show it being used in a proc:

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test@link_to_112;
  9  
 10  END BOBBYTEST ;
 11  /

Warning: Procedure created with compilation errors.

SQL> SHOW ERRORS;
Errors for PROCEDURE BOBBYTEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/3      PL/SQL: SQL Statement ignored
6/3      PL/SQL: ORA-00980: synonym translation is no longer valid

I tried creating a synonym for the remote table but got the same error:

SQL> create synonym test92 for test@link_to_112;

...

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test92;
  9  
 10  END BOBBYTEST ;
 11  /

Warning: Procedure created with compilation errors.

SQL> SHOW ERRORS;
Errors for PROCEDURE BOBBYTEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/3      PL/SQL: SQL Statement ignored
6/3      PL/SQL: ORA-00980: synonym translation is no longer valid

Finally, by chance I found that if I used a view instead of a synonym for the remote table, the proc would compile:

SQL> create view test92 as select * from test@link_to_112;

View created.

...

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test92;
  9  
 10  END BOBBYTEST ;
 11  /

Procedure created.

SQL> SHOW ERRORS;
No errors.
SQL> 
SQL> execute bobbytest;

PL/SQL procedure successfully completed.

SQL> show errors
No errors.

Now one last thing to check. Will the plan work with the view?

SQL> explain plan into plan_table for
  2  select * from test92
  3  /

Explained.

SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Error: cannot fetch last explain plan from PLAN_TABLE

Sadly, the view was not a cure-all. So, here is a summary of what to do if you have a procedure on a 19c database that needs to access a table on a 9.2 database (a consolidated SQL sketch follows the list):

  • Create a link on an 11.2 database to the 9.2 database
  • Create a synonym on the 11.2 database pointing to the table on the 9.2 database
  • Create a link on the 19c database to the 11.2 database
  • Create a view on the 19c database that queries the synonym on the 11.2 database
  • Use the view in your procedure on your 19c database
  • Explain plans may not work with SQL that uses the view
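
To pull that together, here is a consolidated sketch of the steps using the link and object names from my example. The connection details in angle brackets are placeholders, not the real values I removed above:

-- Steps 1 and 2: on the 11.2 "middleman" database
create database link link_from_112
connect to <user_on_92> identified by <password>
using '<tns_alias_for_92_db>';

create synonym test for test@link_from_112;

-- Steps 3 and 4: on the 19c database
create database link link_to_112
connect to <user_on_112> identified by <password>
using '<tns_alias_for_112_db>';

create view test92 as select * from test@link_to_112;

-- Step 5: PL/SQL on the 19c database queries the view test92,
-- not the remote table or a synonym for it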

Bobby

Categories: DBA Blogs

Updating the trail file location for Oracle GoldenGate Microservices

DBASolved - Fri, 2019-12-13 09:21

When you first install Oracle GoldenGate Microservices, you may have taken the standard installation approach and all the configuration, logging and trail file information will reside in a standard directory structure. This makes the architecture of your environment really easy. Let’s say you want to identify what trail files are being used by the […]

The post Updating the trail file location for Oracle GoldenGate Microservices appeared first on DBASolved.

Categories: DBA Blogs

Q2 FY20 GAAP EPS UP 14% TO $0.69 and NON-GAAP EPS UP 12% TO $0.90

Oracle Press Releases - Thu, 2019-12-12 15:00
Press Release
Q2 FY20 GAAP EPS UP 14% TO $0.69 and NON-GAAP EPS UP 12% TO $0.90
Fusion ERP Cloud Revenue Up 37%; Autonomous Database Cloud Revenue Up >100%

Redwood Shores, Calif.—Dec 12, 2019

Oracle Corporation (NYSE: ORCL) today announced fiscal 2020 Q2 results. Total Revenues were $9.6 billion, up 1% in USD and in constant currency compared to Q2 last year. Cloud Services and License Support revenues were $6.8 billion, while Cloud License and On-Premise License revenues were $1.1 billion.

GAAP Operating Income was up 3% to $3.2 billion, and GAAP Operating Margin was 33%. Non-GAAP Operating Income was $4.0 billion, and non-GAAP Operating Margin was 42%. GAAP Net Income was $2.3 billion, and non-GAAP Net Income was $3.0 billion. GAAP Earnings Per Share was up 14% to $0.69, while non-GAAP Earnings Per Share was up 12% to $0.90.

Short-term deferred revenues were $8.1 billion. Operating Cash Flow was $13.8 billion during the trailing twelve months.

“We had another strong quarter in our Fusion and NetSuite cloud applications businesses with Fusion ERP revenues growing 37% and NetSuite ERP revenues growing 29%,” said Oracle CEO, Safra Catz. “This consistent rapid growth in the now multibillion dollar ERP segment of our cloud applications business has enabled Oracle to deliver a double-digit EPS growth rate year-after-year. I fully expect we will do that again this year.”

“It’s still early days, but the Oracle Autonomous Database already has thousands of customers running in our Gen2 Public Cloud,” said Oracle CTO, Larry Ellison. “Currently, our Autonomous Database running in our Public Cloud business is growing at a rate of over 100%. We expect that growth rate to increase dramatically as we release our Autonomous Database running on our Gen2 Cloud@Customer into our huge on-premise installed base over the next several months.”

The Board of Directors also declared a quarterly cash dividend of $0.24 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on January 9, 2020, with a payment date of January 23, 2020.

Q2 Fiscal 2020 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q2 results and fiscal 2020 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 4597628.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE:ORCL), visit us at www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our earnings per share and our Autonomous Database business, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our success depends upon our ability to develop new products and services, integrate acquired products and services and enhance our existing products and services. (2) Our cloud strategy, including our Oracle Software-as-a-Service and Infrastructure-as-a-Service offerings, may adversely affect our revenues and profitability. (3) We might experience significant coding, manufacturing or configuration errors in our cloud, license and hardware offerings. (4) If the security measures for our products and services are compromised and as a result, our customers' data or our IT systems are accessed improperly, made unavailable, or improperly modified, our products and services may be perceived as vulnerable, our brand and reputation could be damaged, the IT services we provide to our customers could be disrupted, and customers may stop using our products and services, all of which could reduce our revenue and earnings, increase our expenses and expose us to legal claims and regulatory actions. (5) Our business practices with respect to data could give rise to operational interruption, liabilities or reputational harm as a result of governmental regulation, legal requirements or industry standards relating to consumer privacy and data protection. (6) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) Our international sales and operations subject us to additional risks that can adversely affect our operating results. (8) Acquisitions present many risks and we may not achieve the financial and strategic goals that were contemplated at the time of a transaction. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 12, 2019. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Baltimore Gas & Electric and Oracle Reshape Peak Pricing Programs

Oracle Press Releases - Wed, 2019-12-11 07:00
Press Release
Baltimore Gas & Electric and Oracle Reshape Peak Pricing Programs

Redwood Shores, Calif.—Dec 11, 2019

Baltimore Gas & Electric (BGE) has launched a digital experience pilot for thousands of Baltimore residents who pay on and off peak rates for electricity. BGE is using Oracle Utilities Opower Behavioral Load Shaping Cloud Service to engage customers with a proactive, personalized experience designed to help them save on their utility bills. The new service encourages customers to shift their biggest everyday energy loads, such as running energy-intensive appliances and electric vehicle charging, to off peak times. With these tips, BGE customers can save money while helping reduce daily peak energy demand and supporting a cleaner, healthier grid.

“We know on peak and off peak rates can seem complex, and we have a responsibility to offer excellent service to customers who choose them,” commented Mark Case, VP of regulatory policy and strategy at BGE. “With this new service from Opower, we can deliver a better experience for these customers by helping them shift their energy load for improved power affordability and reliability, all while reducing emissions.”

Learn more about the new Opower Behavioral Load Shaping Service here.

Peak pricing programs have not traditionally provided the ongoing, personalized outreach customers need to help them shift their energy use and benefit from lower off-peak rates. Years of public evaluation data show that programs offering only limited outreach left customers wanting more. With machine learning, user experience design, and customer engagement automation, Opower is reshaping this equation.

With Opower, BGE is providing residents new insight into how small behavior changes can create significant bill savings. Enrolled customers began receiving weekly digital communications that help them understand how their on and off peak rates work. Each customer receives continually evolving content like week-over-week spending comparisons, personalized information about their on and off peak spending, and adaptive, intelligent recommendations for shifting their largest energy loads in order to save money.

“On and off peak rates are nothing new—our industry has been implementing them for decades. Program evaluators have found again and again that customers with peak pricing are eager for better insights into their energy usage and their bills,” noted Dr. Ahmad Faruqui, principal and energy economist with The Brattle Group. “What’s new and different is the way in which enabling technologies boost customer awareness and price responsiveness. BGE and Opower are putting those learnings into practice and employing a smart experimental design that will expand our industry’s body of knowledge.”

BGE and Opower are running the program as a randomized control trial in order to yield novel, statistically significant peak pricing pilot results. Throughout the trial, BGE and Opower will be isolating and measuring the impact of the customer experience itself—discretely from the peak price signal—on bill savings, customer satisfaction, peak demand, and adoption of BGE programs and products that can help customers save even more. The trial started in Summer 2019. 

Several additional utilities in the U.S. are running the Opower Behavioral Load Shaping service this year. This is the fourth new product released by Opower recently, in addition to hundreds of new customer engagement features for utilities and their customers. Opower is the world’s most widely deployed utility customer engagement platform, providing energy data analytics on over two trillion meter reads and powering the utility customer experience for more than 60 million households.

Contact Info
Kristin Reeves
Oracle Corporation
+1 925 787 6744
kris.reeves@oracle.com
Wendy Wang
H&K Strategies
+1 979 216 8157
wendy.wang@hkstrategies.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1 925 787 6744

Wendy Wang

  • +1 979 216 8157

Updating parameter files from REST

DBASolved - Tue, 2019-12-10 10:12

One of the most important and time consuming things to do with Oracle GoldenGate is to build parameter files for the GoldenGate processes.  In the past, this required you to access GGSCI and run commands like: GGSCI> edit params <process group> After which, you then had to bounce the process group for the changes to […]

The post Updating parameter files from REST appeared first on DBASolved.

Categories: DBA Blogs

How to optimize a campaign to get the most out of mobile advertising

VitalSoftTech - Tue, 2019-12-10 09:54

  When marketing for a campaign, we must optimize it in the best way possible to get the most out of it. Otherwise, it is just advertising revenue going to waste. Same goes for mobile advertising. We are here to discuss the best mobile ad strategies. However, before we start, here is a question for […]

The post How to optimize a campaign to get the most out of mobile advertising appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Challenger Series Returns to Southern California in 2020 with Newport Beach, Indian Wells Events

Oracle Press Releases - Tue, 2019-12-10 09:00
Press Release
Oracle Challenger Series Returns to Southern California in 2020 with Newport Beach, Indian Wells Events

Indian Wells, Calif.—Dec 10, 2019

The Oracle Challenger Series today announced its return to Southern California for two events in early 2020. The third stop of the 2019-2020 series takes place at the Newport Beach Tennis Club on January 27 – February 2. The Indian Wells Tennis Garden hosts the final tournament on March 2-8.

Now in its third year, the Oracle Challenger Series helps up-and-coming American players secure both ranking points and prize money in the United States. The two American men and two American women who accumulate the most points over the course of the Challenger Series receive wild cards into the singles main draws at the BNP Paribas Open in Indian Wells. As part of the Oracle Challenger Series’ mission to grow the sport and make professional tennis events more accessible, each tournament is free and open to the public.

The Newport Beach and Indian Wells events will conclude the 2019-2020 Road to Indian Wells and are instrumental in determining which American players receive wild card berths at the 2020 BNP Paribas Open. At the halfway point of the Challenger Series, Houston champion Marcos Giron holds the top spot for the men. Usue Arconada is in first place for the women following an impressive showing in New Haven with finals appearances in both singles and doubles. Trailing just behind them are Tommy Paul, the men’s champion in New Haven, and CoCo Vandeweghe, the women’s runner-up in Houston.

The Newport Beach event has propelled its champions to career-defining seasons over the previous two years. Americans Taylor Fritz and Danielle Collins began their steady climb up the world rankings by capturing the titles at the 2018 inaugural event. Bianca Andreescu’s 2019 title marked the beginning of her meteoric rise to WTA stardom. Likewise, the Indian Wells event has featured some of the Challenger Series’ strongest player fields and produced champions Martin Klizan, Sara Errani, Kyle Edmund and Viktorija Golubic.

The Newport Beach tournament will also feature the Oracle Champions Cup which takes place on Saturday, February 1. Former World No. 1 and 2003 US Open Champion Andy Roddick; 10-time ATP Tour titlist and former World No. 4 James Blake; 2004 Olympic silver medalist and 6-time ATP Tour singles winner Mardy Fish; and 2005 US Open semifinalist Robby Ginepri headline the one-night tournament. The event consists of two one-set semifinals with the winners meeting in a one-set championship match.

Tickets to the Oracle Champions Cup go on sale to the general public on Tuesday, December 17. Special VIP packages including play with the pros, special back-stage access and an exclusive player party are also available.

For more information about the Oracle Challenger Series visit oraclechallengerseries.com, and be sure to follow @OracleChallngrs on Twitter and @OracleChallengers on Instagram. To inquire about volunteer opportunities, including becoming a ball kid, please email oraclechallengerseries@desertchampions.com.

Contact Info
Mindi Bach
Oracle
mindi.bach@oracle.com
About the Oracle Challenger Series

The Oracle Challenger Series was established to help up-and-coming American tennis players secure both ranking points and prize money. The Oracle Challenger Series is the next chapter in Oracle’s ongoing commitment to support U.S. tennis for men and women at both the collegiate and professional level. The Challenger Series features equal prize money in a groundbreaking tournament format that combines the ATP Challenger Tour and WTA 125K Series.

The Oracle Challenger Series offers an unmatched potential prize of wild cards into the main draw of the BNP Paribas Open, widely considered the top combined ATP Tour and WTA professional tennis tournament in the world, for the top two American male and female finishers.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The Global Oracle APEX Community Delivers. Again.

Joel Kallman - Mon, 2019-12-09 16:59

Oracle was recently recognized as a November 2019 Gartner Peer Insights Customers’ Choice for Enterprise Low-Code Application Platform Market for Oracle APEX.  You can read more about that here.

I personally regard this a distinction for the global Oracle APEX community.  We asked for your assistance by participating in these reviews, and you delivered.  Any time we've asked for help or feedback, the Oracle APEX community has selflessly and promptly responded.  You have always been very gracious with your time and energy.

I was telling someone recently how I feel the Oracle APEX community is unique within all of Oracle, but I also find it to be unique within the industry.  It is the proverbial two-way partnership that many talk about but rarely live through their actions.  We remain deeply committed to our customers' personal and professional success - it is a mindset which permeates our team.  We are successful only when our customers and partners are successful.

Thank you to all who participated in the Gartner Peer Insights reviews - customers, partners who nudged their customers, and enthusiasts.  You, as a community, stand out amongst all others.  We are grateful for you.

Oracle Names Vishal Sikka to the Board of Directors

Oracle Press Releases - Mon, 2019-12-09 15:15
Press Release
Oracle Names Vishal Sikka to the Board of Directors

Redwood Shores, Calif.—Dec 9, 2019

Oracle (NYSE: ORCL) today announced that Dr. Vishal Sikka, founder and CEO of the AI company Vianai Systems, has been named to Oracle’s Board of Directors.  Before starting Vianai, Vishal was a top executive at SAP and the CEO of Infosys.

“The digital transformation of an enterprise is enabled by the rapid adoption of modern cloud applications and technologies,” said Oracle CEO Safra Catz. “Vishal clearly understands how Oracle’s Gen2 Cloud Infrastructure, Autonomous Database and Applications come together in the Oracle Cloud to help our customers drive business value and adapt to change. I am very happy that he will be joining the Oracle Board.”

“For years, the Oracle Database has been the heartbeat and life-blood of every large and significant organization in the world,” said Dr. Vishal Sikka. “Today, Oracle is the only one of the big four cloud companies that offers both Enterprise Application Suites and Secure Infrastructure technologies in a single unified cloud. Oracle’s unique position in both applications and infrastructure paves the way for enormous innovation and growth in the times ahead. I am excited to have the opportunity to join the Oracle Board, and be part of this journey.”

“Vishal is one of the world’s leading experts in Artificial Intelligence and Machine Learning,” said Oracle Chairman and CTO Larry Ellison. “These AI technologies are key foundational elements of the Oracle Cloud’s Autonomous Infrastructure and Intelligent Applications. Vishal’s expertise and experience make him ideally suited to provide strategic vision and expert advice to our company and to our customers. He is a most welcome addition to the Oracle Board.”

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com/investor or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor Statement

Statements in this press release relating to Oracle’s future plans, expectations, beliefs, intentions and prospects are “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (“SEC”) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC, by contacting Oracle Corporation’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 9, 2019. Oracle undertakes no duty to update any statement in light of new information or future events.

Oracle Health Sciences Participates in TOP Tech Sprint

Oracle Press Releases - Mon, 2019-12-09 07:00
Press Release
Oracle Health Sciences Participates in TOP Tech Sprint
Could enable the use of open data and AI to match cancer patients with clinical trials and experimental therapies

Redwood Shores, Calif.—Dec 9, 2019

Clinical trials are an essential gateway for getting new cures to market. However, many patients struggle to find the right trials that meet their unique medical requirements. To explore better ways to match patients with the right trials, Oracle Health Sciences is once again participating in The Opportunity Project (TOP) Technology Sprint: Creating the Future of Health.

This year’s entry joins Oracle technology with de-identified precision oncology open data sets from the United States Department of Veterans Affairs and the National Cancer Institute. The demo will highlight how Artificial Intelligence (AI) and customer experience solutions could be used to connect cancer patients with available clinical trials and experimental therapies.

“It is paramount that we collaborate with our peers within the federal government and technology communities to collectively evaluate what innovative opportunities exist and to explore the potential applications AI and machine learning can offer to fight deadly diseases such as cancer,” said Steve Rosenberg, senior vice president and general manager, Oracle Health Sciences. “The opportunity to participate in the TOP challenge lets us apply Oracle solutions in new ways while also harnessing the learnings to benefit the lives of patients who need treatment in the future.”

Connecting Patients with Critical Trials

This year Oracle’s entry builds on the last technology sprint by leveraging open datasets to explore more deeply the applications of machine learning (ML) and AI. In addition, it demonstrates how features for prospective trial recruitment will work with appropriate identity protection.

Oracle’s submission uses a combination of Oracle Healthcare Foundation, Oracle CX Service, Oracle Policy Automation, Oracle Digital Assistant and Oracle Labs PGX: Parallel Graph AnalytiX solutions to create a demonstration that in the future might enable connecting patients and clinical staff through intuitive interfaces that provide data at the point of care. A graphical interface would allow physicians to track a patient’s care journey and would indicate which clinical trial options are available. It applies AI to standardize data from clinical trial requirement forms to specify eligibility criteria. The result can be a more simplified and personalized experience to help determine the best treatment for patients. Patients can also keep their identifying information from being shared, while allowing only their de-identified clinical data to be made available so they can receive information about new programs, clinical studies or therapies that may be of value to their care.

TOP is a 12-week technology development sprint that brings together technology developers, communities, and government to solve real-world problems using open data. TOP will host its Demo Day 2019 on December 10, 2019 at the U.S. Census Bureau in Suitland, MD.

Contact Info
Judi Palmer
Oracle
+1 650.784.4119
judi.palmer@oracle.com
Rick Cohen
Blanc & Otus
+1 212.885.0563
rick.cohen@blancandotus.com
About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As a leader in Life Sciences technology, Oracle Health Sciences is trusted by 30 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for managing clinical trials and pharmacovigilance around the globe.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650.784.4119

Rick Cohen

  • +1 212.885.0563

Teri Meri Prem Kahani Cover | Keyboard Performance | by Dharun at Improviser Music Studio

Senthil Rajendran - Mon, 2019-12-09 03:13

My Son Dharun Performing at Improviser Music Studio

Teri Meri Prem Kahani Cover

DarkSide Cover | Keyboard Performance | by Dharun at Improviser Music Studio

Senthil Rajendran - Mon, 2019-12-09 03:11
My Son Dharun Performing at Improviser Music Studio

DarkSide Cover

Installing Oracle 19c on Linux

Pete Finnigan - Sat, 2019-12-07 20:53
I needed to create a new 19c install yesterday for a test of some customer software and whilst I love Oracle products I have to say that installing the software and database has never been issue free and simple over....[Read More]

Posted by Pete On 06/12/19 At 04:27 PM

Categories: Security Blogs

Gather Stats while doing a CTAS

Tom Kyte - Fri, 2019-12-06 17:53
Can you please provide your opinion on the below point. This is what I have noticed. When we create a table using a CTAS, and then check the user_Tables, the last_analyzed and num_rows column is already populated with accurate data. If it is so, ...
Categories: DBA Blogs

How can application control to explicitly call OCIStmtPrepare2 rather than OCIStmtPrepare when using pro*C

Tom Kyte - Fri, 2019-12-06 17:53
Our application got an ORA-25412: transaction replay disabled by call to OCIStmtPrepare. Oracle Version: 12.2. The Oracle database runs in RAC mode. After searching on the internet, we found the explanation below: This call (OCIStmtPrepare) does no...
Categories: DBA Blogs

'BEFORE CREATE ON SCHEMA' trigger apparently not firing before Create Table

Tom Kyte - Fri, 2019-12-06 17:53
In an Oracle 8.1.7 instance set up with characterset US7ASCII: Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production With the Partitioning option JServer Release 8.1.7.4.0 - Production SQL> create table t1 (c1 varchar2...
Categories: DBA Blogs

Left Joining Four Tables without duplicates from right tables or Cartesian product!

Tom Kyte - Fri, 2019-12-06 17:53
I am running the query below to get data from 4 tables, but the problem is that the data is fetched as a Cartesian product; in other words, items from tblEdu are being duplicated with items from tblTrain: SELECT tblpersonal.*, tbltrain.*, tbledu.*,...
Categories: DBA Blogs

Temp space

Jonathan Lewis - Fri, 2019-12-06 06:18

A question about hunting down the source of the error “ORA-01652 unable to extend temp segment by NNN in tablespace XXX” shows up on the Oracle-L mailing list or the Oracle developer community forum from time to time. In most cases the tablespace referenced is the temporary tablespace, which means the session reporting the error was probably trying to allocate some space for sorting, or doing a hash join, or instantiating a GTT (global temporary table) or a CTE (common table expression / “with” subquery). The difficulty in cases like this is that the session reporting the error might be the victim of some other session’s greed – so looking at what the session was doing won’t necessarily point you to the real problem.

Of course you then run into a further problem tracking down the source of the problem. By the time you hear the complaint (even if it’s only seconds after the error appeared) the session that had been hogging the temporary tablespace may have finished what it was doing, leaving a huge amount of free space in the temporary tablespace and suggesting (to the less experienced and cynical user) that there’s something fundamentally wrong with the way Oracle has been accounting for space usage.

If you find yourself in this situation remember that (if you’re licensed to take advantage of it) the active session history may be able to help.  One of the columns in v$active_session_history is called temp_space_allocated with the slightly misleading description: “Amount of TEMP memory (in bytes) consumed by this session at the time this sample was taken”. A simple query against v$active_session_history may be enough to identify the session and SQL  statement that had been holding the temporary space when the error was raised, for example:


column pga_allocated        format 999,999,999,999
column temp_space_allocated format 999,999,999,999

break on session_id skip 1 on session_serial#

select
        session_id, session_serial#, 
        sample_id, 
        sql_id, 
        pga_allocated,
        temp_space_allocated
from
        v$active_session_history
where
        sample_time between sysdate - 5/1440 and sysdate
and     nvl(temp_space_allocated,0) != 0
order by
        session_id, sample_id
/

All I’ve done for this example is query v$active_session_history for the last 5 minutes reporting a minimum of information from any rows that show temp space usage. As a minor variation on the theme you can obviously change the time range, and you might want to limit the output to rows reporting more than 1MB (say) of temp space usage.
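
As a sketch of that variation (the 30-minute window and the 1MB threshold are arbitrary values I have picked for illustration, nothing special):

select
        session_id, session_serial#,
        sample_id,
        sql_id,
        pga_allocated,
        temp_space_allocated
from
        v$active_session_history
where
        sample_time between sysdate - 30/1440 and sysdate
and     nvl(temp_space_allocated,0) > 1048576
order by
        session_id, sample_id
/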

You’ll notice that I’ve also reported the pga_allocated (Description: Amount of PGA memory (in bytes) consumed by this session at the time this sample was taken) in this query; this is just a little convenience – a query that’s taking a lot of temp space will probably start by acquiring a lot of memory so it’s nice to be able to see the two figures together.

There are plenty of limitations and flaws in the usefulness of this report and I’ll say something about that after showing an example of usage. Let’s start with a script to build some data before running a space-consuming query:


rem
rem     Script:         allocate_tempspace.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1 as 
select * from all_objects
;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

execute dbms_stats.gather_table_stats(null,'t1')

execute dbms_lock.sleep(20)

set pagesize  60
set linesize 255
set trimspool on
set serveroutput off
alter session set statistics_level = all;

with ttemp as (
        select /*+ materialize */ * from t1
)
select 
        /*+ no_partial_join(@sel$2 t1b) no_place_group_by(@sel$2) */ 
        t1a.object_type,
        max(t1a.object_name)
from
        ttemp t1a, ttemp t1b
where
        t1a.object_id = t1b.object_id
group by
        t1a.object_type
order by
        t1a.object_type
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

My working table t1 consists of 16 copies of the view all_objects – so close to 1 million rows in my case – and the query is hinted to avoid any of the clever transformations that the optimizer could use to reduce the workload so it’s going to do a massive hash join and aggregation to report a summary of a couple of dozen rows. Here’s the execution plan (in this case from 12.2.0.1, though the plan is the same for 19.3 with some variations in the numbers).


SQL_ID  1cwabt12zq6zb, child number 0
-------------------------------------

Plan hash value: 1682228242

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |     29 |00:00:10.03 |   47413 |  21345 |  12127 |       |       |          |         |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |     29 |00:00:10.03 |   47413 |  21345 |  12127 |       |       |          |         |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D665B_E2772D3 |      1 |        |      0 |00:00:01.51 |   28915 |      0 |   9217 |  2068K|  2068K|          |         |
|   3 |    TABLE ACCESS FULL                     | T1                         |      1 |    989K|    989K|00:00:00.24 |   19551 |      0 |      0 |       |       |          |         |
|   4 |   SORT GROUP BY                          |                            |      1 |     29 |     29 |00:00:08.51 |   18493 |  21345 |   2910 |  6144 |  6144 | 6144  (0)|         |
|*  5 |    HASH JOIN                             |                            |      1 |     15M|     15M|00:00:03.93 |   18493 |  21345 |   2910 |    48M|  6400K|   65M (1)|   25600 |
|   6 |     VIEW                                 |                            |      1 |    989K|    989K|00:00:00.36 |    9233 |   9218 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D665B_E2772D3 |      1 |    989K|    989K|00:00:00.35 |    9233 |   9218 |      0 |       |       |          |         |
|   8 |     VIEW                                 |                            |      1 |    989K|    989K|00:00:00.40 |    9257 |   9217 |      0 |       |       |          |         |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D665B_E2772D3 |      1 |    989K|    989K|00:00:00.39 |    9257 |   9217 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1A"."OBJECT_ID"="T1B"."OBJECT_ID")

Critically this plan shows us two uses of the temp space but only reports one of them as Used-Tmp. The “hash join” at operation 5 tells us that it reached 65MB of (tunable PGA) memory before going “1-pass”, eventually dumping 25,600 KB to disc. This space usage is corroborated by the 2,910 writes (which, at an 8KB block size, would be 23,280 KB). The missing Used-Tmp, however, is the space taken up by the materialized CTE. We can see that operation 2 is a “load as select” that writes 9,217 blocks to disc (subsequently read back twice – the tablescans shown in operations 7 and 9). That’s roughly 74,000 KB of temp space that doesn’t get reported as Used-Tmp.

If we take a look at the plan from 19.3 we see different numbers, but the same “error of omission”:

SQL_ID  1cwabt12zq6zb, child number 0
-------------------------------------

Plan hash value: 1682228242

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |     25 |00:00:08.15 |   34905 |  13843 |   8248 |       |       |          |         |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |     25 |00:00:08.15 |   34905 |  13843 |   8248 |       |       |          |         |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6624_E259E68 |      1 |        |      0 |00:00:01.26 |   23706 |      0 |   5593 |  2070K|  2070K|          |         |
|   3 |    TABLE ACCESS FULL                     | T1                         |      1 |    907K|    907K|00:00:00.21 |   18024 |      0 |      0 |       |       |          |         |
|   4 |   SORT GROUP BY                          |                            |      1 |     25 |     25 |00:00:06.89 |   11193 |  13843 |   2655 |  6144 |  6144 | 6144  (0)|         |
|*  5 |    HASH JOIN                             |                            |      1 |     14M|     14M|00:00:03.55 |   11193 |  13843 |   2655 |    44M|  6400K|   64M (1)|      23M|
|   6 |     VIEW                                 |                            |      1 |    907K|    907K|00:00:00.26 |    5598 |   5594 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6624_E259E68 |      1 |    907K|    907K|00:00:00.25 |    5598 |   5594 |      0 |       |       |          |         |
|   8 |     VIEW                                 |                            |      1 |    907K|    907K|00:00:00.34 |    5595 |   5594 |      0 |       |       |          |         |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6624_E259E68 |      1 |    907K|    907K|00:00:00.33 |    5595 |   5594 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1A"."OBJECT_ID"="T1B"."OBJECT_ID")


With slightly fewer rows in t1 (907K vs. 989K) we write 5,593 blocks for the materialized CTE  (instead of 9,217) and spill 2,655 blocks during the hash join (instead of 2,910). But again it’s only the hash join spill that is reported under Used-Tmp. Note, by the way, that the Used-Tmp in 12.2 was reported in KB when it’s reported in MB in 19.3.0.0.

Side note: comparing the number of rows created and blocks written for the CTE, it looks as if 19.3 is using the data blocks much more efficiently than 12.2. There’s no obvious reason for this (though a first guess would be that the older mechanism writes a GTT with pctfree=10 while the new one avoids any free space and transactional details), so, as ever, I now have another draft for a blog note reminding me to investigate (eventually) what differences there are in CTE storage on the upgrade. It’s something that might make a difference in a few special cases.

With the figures from the execution plans in mind we can now look at the results of the query against v$active_session_history. Conveniently the queries took a few seconds to complete, so we’re going to see several rows for each execution.

First the results from 12.2.0.1

SESSION_ID SESSION_SERIAL#  SAMPLE_ID SQL_ID           PGA_ALLOCATED TEMP_SPACE_ALLOCATED
---------- --------------- ---------- ------------- ---------------- --------------------
        14           22234   15306218 1cwabt12zq6zb       95,962,112            1,048,576
                             15306219 1cwabt12zq6zb       97,731,584           37,748,736
                             15306220 1cwabt12zq6zb      148,194,304           77,594,624
                             15306221 1cwabt12zq6zb      168,117,248           85,983,232
                             15306222 1cwabt12zq6zb      168,117,248           90,177,536
                             15306223 1cwabt12zq6zb      168,117,248           95,420,416
                             15306224 1cwabt12zq6zb      168,117,248           98,566,144
                             15306225 1cwabt12zq6zb      168,117,248          102,760,448
                             15306226 1cwabt12zq6zb      116,933,632          103,809,024
                             15306227 1cwabt12zq6zb      116,933,632          103,809,024
                             15306228 b66ycurnwpgud        8,602,624            1,048,576

I pointed out that we had 25,600 KB reported as Used-Tmp and roughly 74,000 KB unreported – a total of nearly 100,000 KB that is reasonably close to the 103,800,000 bytes reported by ASH. Moreover the timing of the plan (loading the CTE in the first 2 seconds) seems to agree with the growth of temp_space_allocated to 77,590,000 by the time we get to sample_id 15306220 in ASH. Then we have several seconds of slow growth as the hash join takes place and feeds its results up to the sort group by. At the end of the query we happen to have been lucky enough to catch one last sample just before the session had released all its temp space and ceased to be active. (Note, however, that the sql_id at that sample point was not the sql_id of our big query – and that’s a clue about one of the limitations of using ASH to find the greedy SQL.)

We see the same pattern of behaviour in 19.3.0.0:


SESSION_ID SESSION_SERIAL#  SAMPLE_ID SQL_ID           PGA_ALLOCATED TEMP_SPACE_ALLOCATED
---------- --------------- ---------- ------------- ---------------- --------------------
       136           42767    2217500 1cwabt12zq6zb      143,982,592           46,137,344
                              2217501 1cwabt12zq6zb      193,527,808           54,525,952
                              2217502 1cwabt12zq6zb      193,527,808           57,671,680
                              2217503 1cwabt12zq6zb      193,527,808           61,865,984
                              2217504 1cwabt12zq6zb      197,722,112           67,108,864
                              2217505 1cwabt12zq6zb      150,601,728           70,254,592
                              2217506 1cwabt12zq6zb      150,601,728           70,254,592

We start with an almost instantaneous jump to 46MB of temp_space_allocated in the first second of the query – that’s the 5,593 blocks of the CTE being materialized, then the slow growth of temp space as the hash join runs, spills to disc, and passes its data up to the sort group by. Again we can see that the peak usage was the CTE (46MB) plus the reported spill of 23MB (plus rounding errors and odd bits).

Preliminary Observations

Queries against ASH (v$active_session_history) can show us sessions that were holding space in the temporary tablespace at the moment a sample of active sessions was taken. This may allow us to identify greedy sessions that were causing other sessions to fail with ORA-01652 (unable to extend temp segment).

We have seen that there is at least one case where we get better information about temp space allocation from ASH than we do from the variants on v$sql_plan that include the SQL Workarea information (v$sql_workarea, v$sql_workarea_active) because the space acquired during materialization of CTEs is not reported as a “tunable SQL workarea” but does appear in the ASH temp_space_allocated.
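
For comparison, this is a minimal sketch of the sort of query you might run against v$sql_workarea_active while a statement is still running – and, as noted above, it will only show the tunable workareas, so the space taken by a materialized CTE will not appear here:

select
        sid,
        sql_id,
        operation_type,
        operation_id,
        actual_mem_used,
        tempseg_size,
        tablespace
from
        v$sql_workarea_active
where
        tempseg_size is not null
order by
        tempseg_size desc
/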

At first sight it looks as if we may be able to use the query against ASH to identify the statement (by sql_id) that was the one being run by the greedy session when it consumed all the space. As we shall see in a further article, there are various reasons why this may be over-optimistic; however, in many cases there’s a fair chance that when you see the same sql_id appearing in a number of consecutive rows of the report, that statement may be the thing that is responsible for the recent growth in temp space usage – and you can query v$sql to find the text and call dbms_xplan.display_cursor() to get as much execution plan information as possible.
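
As a sketch of that follow-up step, substituting whatever sql_id the ASH report highlighted (I’ve simply reused the sql_id from my example here):

select  sql_fulltext
from    v$sql
where   sql_id = '1cwabt12zq6zb'
and     rownum = 1
;

select * from table(dbms_xplan.display_cursor('1cwabt12zq6zb',null,'allstats last'));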

Further questions
  • When does a session release the temp_space_allocated? Will the space be held (locked) as long as the cursor is open, or can it be released when it is no longer needed? Will it be held, but releasable, even after the cursor has (from the client program’s perspective) been closed?
  • Could we be fooled by a report that said a session was holding a lot of space when it didn’t need it and would have released it if the demand had appeared?
  • Under what conditions might the temp_space_allocated in an ASH sample have nothing to do with the sql_id reported in the same sample?
  • Are there any other reasons why ASH might report temp_space_allocated when an execution plan doesn’t?
  • Is temp_space_allocated only about the temporary tablespace, or could it include information about other (“permanent”) tablespaces?

Stay tuned for further analysis of the limitations of using v$active_session_history.temp_space_allocated to help identify the source of a space management ORA-01652 issue.


Machine Learning and Spatial for FREE in the Oracle Database

Rittman Mead Consulting - Fri, 2019-12-06 04:34
Machine Learning and Spatial for FREE in the Oracle Database

Last week at UKOUG Techfest19 I spoke a lot about Machine Learning, both with Oracle Analytics Cloud and, more in depth, in the Database with Oracle Machine Learning, together with Charlie Berger, Oracle Senior Director of Product Management.

As mentioned several times in my previous blog posts, Oracle Analytics Cloud provides a set of tools helping Data Analysts start their path to Data Science. If, on the other hand, we're dealing with experienced Data Scientists and huge datasets, Oracle's proposal is to move Machine Learning to where the data resides with Oracle Machine Learning. OML is an ecosystem of various options to perform ML, with dedicated integration with Oracle Databases or Big Data appliances.

One of the best-known branches is OML4SQL, which provides the ability to do proper data science directly in the database with PL/SQL calls! During the UKOUG TechFest19 talk Charlie Berger demoed it using a collaborative Notebook on top of an Autonomous Data Warehouse Cloud.
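
To give a flavour of what an OML4SQL call looks like, here is a minimal sketch of building and scoring a classification model with DBMS_DATA_MINING. The table names, column names and model name are hypothetical placeholders, and a real project would normally add a settings table to pick the algorithm explicitly:

BEGIN
  -- build a classification model directly in the database
  -- (CUSTOMERS_TRAIN, CUST_ID and CHURN_FLAG are placeholder names)
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'CHURN_MODEL',
    mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
    data_table_name     => 'CUSTOMERS_TRAIN',
    case_id_column_name => 'CUST_ID',
    target_column_name  => 'CHURN_FLAG');
END;
/

-- score new rows with the built-in SQL scoring functions
SELECT cust_id,
       PREDICTION(churn_model USING *)             AS predicted_churn,
       PREDICTION_PROBABILITY(churn_model USING *) AS churn_probability
FROM   customers_new;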

Both Oracle ADW and ATP include OML by default at no extra cost. This wasn't true for all the other database offerings, in the cloud or on-premises, which required an additional option to be purchased (the Advanced Analytics one for on-premises deals). The separate license requirement was obviously something that limited the spread of this functionality, but I'm happy to say that it's going away!

Oracle's blog post yesterday announced that:

As of December 5, 2019, the Machine Learning (formerly known as Advanced Analytics), Spatial and Graph features of Oracle Database may be used for development and deployment purposes with all on-prem editions and Oracle Cloud Database Services. See the Oracle Database Licensing Information Manual (pdf) for more details.

What this means is that both features are included for FREE within the Oracle Database license! Great news for both Machine Learning and Graph Database fans! The following tweet from Dominic Giles (Master Product Manager for the Oracle DB) provides a nice summary of the licenses, including the two options, for Oracle DB 19c.

The #Oracle Database now has some previously charged options added to the core functionality of both Enterprise Edition and Standard Edition 2. Details in the 19c licensing guide with more information to follow. pic.twitter.com/dqkRRQvWq2

— dominic_giles (@dominic_giles) December 5, 2019

But hey, this license change also affects older versions, starting from 12.2, the oldest one still in general support! So, no more excuses, perform Machine Learning where your data is: in the database with Oracle Machine Learning!

Categories: BI & Warehousing

Oracle Ranks First in all Four Use Cases for Oracle Database in Gartner’s Critical Capabilities for Operational Database Management Systems Report

Oracle Press Releases - Thu, 2019-12-05 07:00
Press Release
Oracle Ranks First in all Four Use Cases for Oracle Database in Gartner’s Critical Capabilities for Operational Database Management Systems Report
Oracle also named a Leader in 2019 Gartner Magic Quadrant for Operational Database Management Systems, recognized in every report published since 2013

Redwood Shores, Calif.—Dec 5, 2019

Oracle today announced that it has been recognized in two newly released Gartner database reports. Oracle was ranked first in all four use cases of the 2019 Gartner “Critical Capabilities for Operational Database Management Systems” report [1] and was named a Leader in Gartner’s 2019 “Magic Quadrant for Operational Database Management Systems” report [2].

The self-driving Oracle Autonomous Database eliminates complexity, human error, and manual management to enable highest reliability, performance, and security at low cost.

“We believe Oracle’s placement in Gartner’s reports demonstrates our continued leadership in the database market and our commitment to innovation across our data management portfolio,” said Andrew Mendelsohn, Executive Vice President Database Server Technologies, Oracle. “Oracle continues to deliver unprecedented performance, reliability, security, and new cutting-edge technology via our cloud and on-premises offerings.”

Oracle believes it was positioned as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems for its continued innovation across its database management portfolio. The Oracle Autonomous Database is available in the cloud and will be available for on-premises deployment soon through its Oracle Generation 2 Cloud at Customer offering. Oracle Database 19c includes all the latest database innovations, and is the long term support release for Oracle Database 12c Release 2. Oracle also recently shipped the Oracle Exadata Database Machine X8M, which employs Intel® Optane DC persistent memory and innovative database RDMA technologies to deliver up to 20x better latency than All Flash storage arrays.

For the Gartner Operational Database Management Systems Critical Capabilities report, Oracle Database once again ranked No. 1 in all four core operational database use cases: traditional transactions, distributed variable data, event processing/data in motion, and augmented transactions.

Oracle further demonstrates its commitment in continuing to deliver a converged database that makes it easy for developers to build multi-model, data-driven applications. The Oracle Database now includes several different sharding capabilities, enhancing automated data distribution especially important for hybrid cloud or hyperscale requirements.

Oracle Autonomous Database builds on 40 years of experience supporting the world’s most demanding applications. The first-of-its-kind, Oracle Autonomous Database uses groundbreaking machine learning to enable self-driving, self-repairing, and self-securing capabilities with cloud economies of scale and elasticity. The complete automation of database and infrastructure operations like patching, tuning and upgrading, cuts administrative costs, and allows developers, business analysts, and data scientists to focus on getting more value from data and building new innovations.

Download a complimentary copy of Gartner’s 2019 Critical Capabilities for Operational Database Management Systems here.

Download a complimentary copy of Gartner’s 2019 Magic Quadrant for Operational Database Management Systems here.

[1] Source: Gartner, Critical Capabilities for Operational Database Management Systems, Donald Feinberg, Merv Adrian, Nick Heudecker, 25 November 2019.
[2] Source: Gartner, Magic Quadrant for Operational Database Management Systems, Merv Adrian, Donald Feinberg, Nick Heudecker, 25 November 2019.

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Victoria Brown
Oracle
+1.650.850.2009
victoria.brown@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle's products may change and remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Victoria Brown

  • +1.650.850.2009
