Feed aggregator

Oracle Security Training Manuals for Sale

Pete Finnigan - 34 min 20 sec ago
We have one set of Manuals for the recent training we held here in York and one from 2018. These can be bought as individual books as follows: This manual is from the York class in October 2019 and can....[Read More]

Posted by Pete On 19/11/19 At 03:05 PM

Categories: Security Blogs

Urban Leaders Power the Future with Oracle

Oracle Press Releases - 14 hours 25 min ago
Press Release
Urban Leaders Power the Future with Oracle
Global Research Highlights Cloud and Advanced Technologies as the Driver of Innovation among Smart Cities

SMART CITIES EXPO WORLD CONGRESS, Barcelona—Nov 20, 2019

Data is at the core of successful smart city innovation, according to new research from Oracle and economic and urban research consultancy ESI ThoughtLab. The Building a Hyperconnected City study found that cities are drowning in data from advancements such as the Internet of Things (IoT). The survey projected that there will be more than 30 billion connected devices generating data by 2020. The study notes that for cities to become truly ‘smart’, they must have a cloud infrastructure in place to extract, integrate, and analyze this data to glean the insights needed to enhance everything from citizen services to building projects.

The report surveyed 100 cities across the United States, APAC, EMEA and LATAM.

The hyper-connected multiplier effect

According to the study, the average return on investment in hyper-connected initiatives ranges from three to four percent. As cities become more interlinked, their ROI grows: returns rise from 1.8 percent for implementers just starting out to 2.6 percent for advancers, while hyper-connected leaders see a 5.0 percent boost. That can translate into enormous returns, ranging from $19.6 million for implementers to $40.0 million for advancers and $83 million for hyper-connected leaders.

Other key findings from the study include:

  • AI, Blockchain and biometrics are increasingly pervasive: Cities are using these technologies in key urban areas, such as IT infrastructure and telecoms, mobility and transportation, payment and financial systems, and physical and digital security. City leaders need the right technology platforms and applications to implement and leverage these tools and capabilities.
  • Cybersecurity requires careful planning and is expensive when not implemented properly: The study revealed that half of the 100 city leaders surveyed do not feel adequately prepared for cyberattacks.
  • Smart initiatives are bolstering constituent satisfaction: While physical and digital security top the list of priorities, citizen engagement and satisfaction have risen to become a top-five goal, and 33 percent of innovative leaders in North America have appointed Chief Citizen Experience Officers.
 

“The public sector, particularly at local level, is dealing with seismic technological, demographic and environmental shifts. Data is the rocket fuel for this transformation, and progressive cities are turning to cloud, data platforms, mobile applications and IoT as a way to scale and prepare for the future,” said Susan O’Connor, global director for Smart Cities, Oracle. “In contrast, not taking advantage of emerging technologies such as AI, Blockchain or virtual and augmented reality comes at a cost. Cities of the future need strategic, long-term investments in cloud data architecture, along with the right expertise to guide them through.”

Customer Commitment to Smarter Cities:

“As a data driven organization, we integrate, manage and use data to inform how we improve services for our constituents,” said Hamant Bharadia, assistant director of finance at the London Borough of Lambeth. “Oracle Cloud Applications for financial planning and payroll are an integral part of our digital strategy, setting us up for a modern way of working and engaging with our communities. They are an essential enabler for us to support innovation, improve public safety and realize our vision of making Lambeth a connected, inclusive place to thrive.” 

“Approximately 50% of Buenos Aires sidewalks are in poor condition, and we previously used spreadsheets to plan the routes for our crew to fix them,” said Alejandro Naon, chief of staff of planning of the undersecretariat of pedestrian ways, City of Buenos Aires. “Today, with Oracle CX Field Service Cloud, we can identify and fix the sidewalks exponentially faster because we receive images and information in real time. Our sidewalks are safer, our workers are more productive, and we recovered our Oracle technology investment in 18 months.”

“At the foundation of our smart government innovation is Oracle Analytics Cloud. It is both the heartbeat and hub for sharing information, enabling us to deliver data-driven citizen services and engagement with maximum impact,” said Chris Cruz, director and chief information officer, San Joaquin County. “Our entities throughout San Joaquin County, such as hospitals, law enforcement, transportation and public works, now partner more effectively and are better equipped to meet the health, social, safety and economic needs of our constituents.” 

Oracle's Smart City solutions transform the ways cities can harness and process data through the integration of modern digital technologies and channels. The platform integrates technologies spanning cloud, digital outreach, omni-channel service, case management, mobility, social, IoT, Blockchain, and artificial intelligence while helping ensure comprehensive security and information privacy.

For more information, go to https://www.oracle.com/applications/customer-experience/industries/public-sector/

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Oracle Expands Innovation Lab to Advance Industries

Oracle Press Releases - Tue, 2019-11-19 07:00
Press Release
Oracle Expands Innovation Lab to Advance Industries
Oracle and partners apply latest technology to help construction, communications, and utility companies spark growth through modernization

Redwood Shores, Calif.—Nov 19, 2019

Oracle is expanding its Chicago Innovation Lab, empowering more organizations to explore new technologies and strategies to bolster their digital transformation efforts. Since its successful launch last year, the Lab has helped construction organizations explore and test solutions from Oracle and the larger construction ecosystem in a simulated worksite environment. Today, Oracle is planning for an extended facility and broadening the scope of the newly named Oracle Industries Innovation Lab to feature additional partners and technologies to solve complex business issues and accelerate customer success across more verticals.

“We are at an inflection point with technology as the digital and physical worlds continue to blur for our customers across all industries,” said Mike Sicilia, senior vice president and general manager, Global Business Units, Oracle. “This expanded Lab environment gives our customers and partners a place to co-innovate with tools and technologies that yield operational improvements and empowers them to use data to create new business opportunities and revenue streams. We’re coming together to help redefine the future for these industries.”

The Lab has already welcomed more than 650 visitors, including best-in-class technology partners, customers and industry thought leaders. There, they have worked together in a realistic worksite environment to test how leading-edge solutions such as connected devices, autonomous vehicles, drones, augmented reality, visualization, and artificial intelligence tools can positively impact the construction industry. Moving forward, the Lab will also feature simulated environments for utilities and communications solutions.

Oracle Utilities will explore new concepts driving the future of energy. Lab demonstrations and real-world modeling will range from better managing loads on the grid with distributed energy resources, such as solar, wind and electric vehicles; to using artificial intelligence, IoT and digital-twin technologies to improve network operations and speed outage restoration; to optimizing connections with smart home devices to engage and serve customers, while bolstering the health of the grid with better demand planning. The Lab will also highlight how water, gas and electric utilities can leverage the latest technology to manage and enhance their construction efforts and minimize disruptions during site enhancements, maintenance and upgrades. 

Oracle Communications enables both mobile in-app and web-based digital engagement using contextual voice, HD video and screen-sharing capabilities through its Oracle Live Experience Cloud, which directly enables enterprises in the engineering and construction (E&C) industry to modernize customer experience and field service using enhanced digital engagement channels.

The use cases being demonstrated at the Lab will let customers simulate real-time collaboration on large construction models with massive amounts of data over a high speed, low latency 5G network. 

Contact Info
Judi Palmer
Oracle
+1.650.784.7901
judi.palmer@oracle.com
Brent Curry
H+K Strategies
+1 312.255.3086
brent.curry@hkstrategies.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Brent Curry

  • +1 312.255.3086

Oracle, ITA Announce Wild Card Linkages Between Major College Championships and Oracle Pro Series Events

Oracle Press Releases - Mon, 2019-11-18 07:00
Press Release
Oracle, ITA Announce Wild Card Linkages Between Major College Championships and Oracle Pro Series Events

TEMPE, Ariz.—Nov 18, 2019

Oracle and the Intercollegiate Tennis Association (ITA) jointly announced today the creation of wild card linkages between major college tennis championships and the Oracle Pro Series. The champions and finalists from the Oracle ITA Masters, the ITA All-American Championships and the Oracle ITA National Fall Championships will be awarded wild card entries into Oracle Pro Series events beginning with the 2020 season.

The opportunity to earn wild card entries into Oracle Pro Series tournaments is available to college players from all five divisions (NCAA DI, DII, DIII, NAIA and Junior College). Singles and doubles champions from The All-American Championships and the Oracle ITA National Fall Championships as well as the Oracle ITA Masters singles champions will earn wild cards into Oracle Challenger level events. Singles and doubles finalists from the All-American Championships and the Oracle ITA National Fall Championships will earn wild cards into Oracle $25k tournaments. ITA Cup singles champions (from NCAA DII, DIII, NAIA and Junior College) will also earn wild card entries into Oracle $25K tournaments.

Eighteen individuals and eight doubles teams have already secured wild cards for Oracle Pro Series tournaments in 2020 following their play at the 2019 Oracle ITA Masters, 2019 ITA All-American Championships, 2019 ITA Cup, and 2019 Oracle ITA National Fall Championships. The list includes:

Oracle ITA Masters

  • Men’s Singles Champion – Daniel Cukierman (USC)
  • Men’s Singles Finalist – Keegan Smith (UCLA)
  • Women’s Singles Champion – Ashley Lahey (Pepperdine)
  • Women’s Singles Finalist – Jada Hart (UCLA)

Oracle ITA National Fall Championships

  • Men’s Singles Champion – Yuya Ito (Texas)
  • Men’s Singles Finalist – Damon Kesaris (Saint Mary’s)
  • Women’s Singles Champion – Sara Daavettila (North Carolina)
  • Women’s Singles Finalist – Anna Turati (Texas)
  • Men’s Doubles Champions – Dominik Kellovsky/Matej Vocel (Oklahoma State)
  • Men’s Doubles Finalists – Robert Cash/John McNally (Ohio State)
  • Women’s Doubles Champions – Elysia Bolton/Jada Hart (UCLA)
  • Women’s Doubles Finalists – Anna Rogers/Alana Smith (NC State)

ITA All-American Championships

  • Men’s Singles Champion – Yuya Ito (Texas)
  • Men’s Singles Finalist – Sam Riffice (Florida)
  • Men’s Doubles Champions – Jack Lin/Jackie Tang (Columbia)
  • Men’s Doubles Finalists – Gabriel Decamps/Juan Pablo Mazzuchi (UCF)
  • Women’s Singles Champion – Ashley Lahey (Pepperdine)
  • Women’s Singles Finalist – Alexa Graham (North Carolina)
  • Women’s Doubles Champions – Jessie Gong/Samantha Martinelli (Yale)
  • Women’s Doubles Finalists – Tenika McGiffin/Kaitlin Staines (Tennessee)

ITA Cup

  • Men’s Division II Singles Champion – Alejandro Gallego (Barry)
  • Men’s Division III Singles Champion – Boris Sorkin (Tufts)
  • Men’s NAIA Singles Champion – Jose Dugo (Georgia Gwinnett)
  • Men’s Junior College Singles Champion – Oscar Gabriel Ortiz (Seward County)
  • Women’s Division II Singles Champion – Berta Bonardi (West Florida)
  • Women’s Division III Singles Champion – Justine Leong (Claremont-Mudd-Scripps)
  • Women’s NAIA Singles Champion – Elyse Lavender (Brenau)
  • Women’s Junior College Singles Champion – Tatiana Simova (ASA Miami)

“This is yet another exciting step forward for all of college tennis as we build upon our ever-growing partnership with Oracle,” said ITA Chief Executive Officer Timothy Russell. “We are forever grateful to our colleagues at Oracle for both their vision and execution of these fabulous opportunities.”

Oracle is partnering with InsideOut Sports & Entertainment, led by former World No. 1 and Hall of Famer Jim Courier and his business partner Jon Venison, to manage the Oracle Pro Series. InsideOut will work with the college players and their respective coaches to coordinate scheduling with respect to their participation in the Pro Series events.

The final schedule for the 2020 Oracle Pro Series will include more than 35 tournaments, most of which will be combined men’s and women’s events. Dates and locations are listed at https://oracleproseries.com/. Follow on social media through #OracleProSeries.

The expanding partnership between Oracle and the ITA builds upon their collaborative efforts to provide playing opportunities and their goal of raising the profile of college tennis and the sport in general. Oracle supports collegiate tennis through sponsorship of the ITA, including hosting marquee events throughout the year such as the Oracle ITA Masters and the Oracle ITA Fall Championships.

Through that partnership, the ITA has been able to showcase its top events to a national audience as the Oracle ITA Masters, ITA All-American Championships and Oracle ITA National Fall Championships singles finals have been broadcast live with rebroadcasts on the ESPN family of networks.

Contact Info
Mindi Bach
Oracle
650.506.3221
mindi.bach@oracle.com
Al Barba
ITA
602-687-6379
abarba@itatennis.com
About the Intercollegiate Tennis Association

The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees men’s and women’s varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men’s and women’s varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.

About Oracle Tennis

Oracle is committed to supporting American tennis for all players across the collegiate and professional levels. Through sponsorship of tournaments, players, ranking, organizations and more, Oracle has infused the sport with vital resources and increased opportunities for players to further their careers. For more information, visit www.oracle.com/corporate/tennis/. Follow @OracleTennis on Twitter and Instagram.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Mindi Bach

  • 650.506.3221

Al Barba

  • 602-687-6379

New Study: Only 11% of Brands Can Effectively Use Customer Data

Oracle Press Releases - Mon, 2019-11-18 07:00
Press Release
New Study: Only 11% of Brands Can Effectively Use Customer Data
Independent study highlights the challenges of bringing together different data types to create a unified customer profile

Redwood Shores, Calif.—Nov 18, 2019

Despite all the hype around customer data platforms (CDPs), a new study conducted by Forrester Consulting and commissioned by Oracle found that brands are struggling to create a unified view of customers. The November 2019 study, “Getting Customer Data Management Right,” which includes insights from 337 marketing and advertising professionals in North America and Europe, found that brands want to unify customer data but face significant challenges in bringing different data types together. 

Brands Want to Centralize Customer Data

As consumers expect more and more personalized experiences, the ability to effectively leverage customer data is shifting from a “nice-to-have” to table stakes:

  • 75% of marketing and advertising professionals believe the ability to “improve the experience of our customers” is a critical or important objective when it comes to the use of customer engagement data.
  • 69% believe it is important to create a unified customer profile across channels and devices.
  • 64% stated that they adopted a CDP to develop a single source of truth so they could understand customers better.

Unified Customer Profiles Lead to Better Business Results

Brands that effectively leverage unified customer profiles are more likely to experience revenue growth, increased profitability and higher customer lifetime values:

  • Brands that use CDPs effectively are 2.5 times more likely to increase customer lifetime value.
  • When asked about the benefits of unified data management, the top two benefits were increased specific functional effectiveness (e.g., advertising, marketing, or sales) and increased channel effectiveness (e.g., email, mobile, web, social media).

The Marketing and Advertising Opportunity

While marketing and advertising professionals understand the critical role unified customer profiles play in personalizing the customer experience, the majority of brands are not able to effectively use a wide variety of data types:

  • 71% of marketing and advertising professionals say a unified customer profile is important or critical to personalization.
  • Only 11% of brands can effectively use a wide variety of data types in a unified customer profile to personalize experiences, provide a consistent experience across channels, and generally improve customer lifetime value and other business outcomes.
  • 69% expect to increase CDP investments at their organization over the next two years.

“A solid data foundation is the most fundamental ingredient to success in today’s Experience Economy, where consumers expect relevant, timely and consistent experiences,” said Rob Tarkoff, executive vice president and general manager, Oracle CX. “At Oracle we have been helping customers manage, secure and protect their data assets for more than 40 years, and this unique experience puts us in the perfect position to help brands leverage all their customer data – digital, marketing, sales, service, commerce, financial and supply chain – to make every customer interaction matter.” 

Read the full study here.

Contact Info
Kim Guillon
Oracle
+1.209.601.9152
kim.guillon@oracle.com
Methodology

Forrester Consulting conducted an online survey of 337 professionals in North America and Europe who are responsible for customer data, marketing analytics, or marketing/advertising technology. Survey participants included decision makers director level and above in marketing or advertising roles. Respondents were offered a small incentive as a thank you for time spent on the survey. The study began in August 2019 and was completed in September 2019.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kim Guillon

  • +1.209.601.9152

Parse Time

Jonathan Lewis - Sun, 2019-11-17 13:37

This is a note I started drafting in October 2012. It’s a case study from an optimizer (10053) trace file someone emailed to me, and it describes some of the high-level steps I went through to see if I could pinpoint what had fooled the optimizer into spending a huge amount of time optimising a statement that ultimately executed very quickly.

Unfortunately I never finished my notes and I can no longer find the trace file that the article was based on, so I don’t really know what I was planning to say to complete the last observation I had recorded.

I was prompted a couple of days ago to publish the notes so far because I was reminded, in a conversation with members of the Oak Table Network, of an article that Franck Pachot wrote a couple of years ago. In 12c Oracle Corp. introduced a time-reporting mechanism for the optimizer trace: if some optimisation step takes “too long” (1 second, by default) then the optimizer will write a “TIMER:” line into the trace file telling you what the operation was, how long it took to complete, and how much CPU time it used. The default for “too long” can be adjusted by setting a “fix control”. This makes it a lot easier to find out where the time went if you see a very long parse time.

But let’s get back to the original trace file and the drafted blog note. It started with a question on OTN and an extract from a tkprof output to back up a nasty performance issue.

=============================================================================================

 

What do you do about a parse time of 46 seconds ? That was the question that came up on OTN a few days ago – and here’s the tkprof output to demonstrate it.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     46.27      46.53          0          5          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.33       0.63        129      30331          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     46.60      47.17        129      30336          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144  
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=30331 pr=129 pw=0 time=637272 us)
       863        863        863   VIEW  VM_NWVW_1 (cr=30331 pr=129 pw=0 time=637378 us cost=1331 size=10 card=1)
       ... and lots more lines of plan

According to tkprof, it takes 46 seconds – virtually all CPU time – to optimise this statement, then 0.63 seconds to run it. You might spot that this is 11gR2 (in fact it’s 11.2.0.3) from the fact that the second line of the “Row Source Operation” includes a report of the estimated cost of the query, which is only 1,331.

Things were actually worse than they seem at first sight; when we saw more of the tkprof output the following also showed up:

SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE 
  NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') 
  NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_00"), 
  NVL(SUM(C2),:"SYS_B_01") 
FROM
 (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("VAL_000002") FULL("VAL_000002") 
  NO_PARALLEL_INDEX("VAL_000002") */ :"SYS_B_02" AS C1, 
  CASE WHEN
    ...
  END AS C2 FROM "BISWEBB"."RECORDTEXTVALUE" 
  SAMPLE BLOCK (:"SYS_B_21" , :"SYS_B_22") SEED (:"SYS_B_23") "VAL_000002" 
  WHERE ... 
 ) SAMPLESUB
 
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0          0           0
Fetch        5     21.41      24.14      11108      37331          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15     21.41      24.15      11108      37331          0           5
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144     (recursive depth: 1)
Number of plan statistics captured: 3
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=7466 pr=3703 pw=0 time=5230126 us)
   3137126    3137126    3137126   PARTITION HASH ALL PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2547843 us cost=18758 size=131597088 card=3133264)
   3137126    3137126    3137126    TABLE ACCESS SAMPLE RECORDTEXTVALUE PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2372509 us cost=18758 size=131597088 card=3133264)

This piece of SQL executed five times as the query was optimised, adding a further 24 seconds elapsed time and 21 CPU seconds which, surprisingly, weren’t included in the headline 46 seconds. The total time spent in optimising the statement was around 70 seconds, of which about 68 seconds were spent on (or waiting for) the CPU.

This is unusual – I don’t often see SQL statements taking more than a few seconds to parse – not since 8i, and not without complex partition views – and I certainly don’t expect to see a low cost query in 11.2.0.3 taking anything like 70 (or even 46) seconds to optimise.

The OP had enabled the 10046 and the 10053 traces at the same time – and since the parse time was sufficiently unusual I asked him to email me the raw trace file – all 200MB of it.

Since it’s not easy to process 200MB of trace the first thing to do is extract a few headline details, and I thought you might be interested to hear about some of the methods I use on the rare occasions when I decide to look at a 10053.

My aim is to investigate a very long parse time and the tkprof output had already shown me that there were a lot of tables in the query, so I had the feeling that the problem would relate to the amount of work done testing possible join orders; I’ve also noticed that the dynamic sampling code ran five times – so I’m expecting to see some critical stage of the optimisation run 5 times (although I don’t know why it should).

Step 1: Use grep (or find if you’re on Windows) to do a quick check for the number of join orders considered. I’m just searching for the text “Join order[” appearing at the start of a line and then counting how many times I find it:

[jonathan@linux01 big_trace]$ grep "^Join order\[" orcl_ora_25306.trc  | wc -l
6266

That’s 6,266 join orders considered – let’s take a slightly closer look:

[jonathan@linux01 big_trace]$ grep -n "^Join order\[" orcl_ora_25306.trc >temp.txt
[jonathan@linux01 big_trace]$ tail -2 temp.txt
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...... from$_subquery$_008[TBL_000020]#2
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

The line of dots represents another 11 tables (or similar objects) in the join order. But there are only 581 join orders (apparently) before the last one in the file (which is a single view transformation). I’ve used the “-n” option with grep, so if I wanted to look at the right bit of the file I could tail the last few thousand lines, but my machine is happy to use vi on a 200MB file, and a quick search (backwards) through the file finds the number 581 in the following text (which does not appear in all versions of the trace file):

Number of join permutations tried: 581

So a quick grep for “join permutations” might be a good idea. (In the absence of this line I’d have got to the same result by directing the earlier grep for “^Join order\[“ to a file and playing around with the contents of the file.)
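For example – and this is only a sketch, since I no longer have the original trace file to test it against – you could strip the “Join order” lines down to their numbers and print the highest number in each search by spotting every reset:

$ grep "^Join order\[" orcl_ora_25306.trc |
        sed 's/^Join order\[\([0-9]*\)\].*/\1/' |
        awk '$1 < prev {print prev} {prev = $1}'

Each value printed is the last join order number of one search, which should match the corresponding “Number of join permutations tried” figure.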

[jonathan@linux01 big_trace]$ grep -n "join permutations" orcl_ora_25306.trc
11495:Number of join permutations tried: 2
11849:Number of join permutations tried: 1
12439:Number of join permutations tried: 2
13826:Number of join permutations tried: 2
14180:Number of join permutations tried: 1
14552:Number of join permutations tried: 2
15938:Number of join permutations tried: 2
16292:Number of join permutations tried: 1
16665:Number of join permutations tried: 2
18141:Number of join permutations tried: 2
18550:Number of join permutations tried: 2
18959:Number of join permutations tried: 2
622799:Number of join permutations tried: 374
624183:Number of join permutations tried: 2
624592:Number of join permutations tried: 2
624919:Number of join permutations tried: 1
625211:Number of join permutations tried: 2
1759817:Number of join permutations tried: 673
1760302:Number of join permutations tried: 1
1760593:Number of join permutations tried: 2
1760910:Number of join permutations tried: 1
1761202:Number of join permutations tried: 2
2750475:Number of join permutations tried: 674
2751325:Number of join permutations tried: 2
2751642:Number of join permutations tried: 1
2751933:Number of join permutations tried: 2
2752250:Number of join permutations tried: 1
2752542:Number of join permutations tried: 2
3586276:Number of join permutations tried: 571
3587133:Number of join permutations tried: 2
3587461:Number of join permutations tried: 1
3587755:Number of join permutations tried: 2
3588079:Number of join permutations tried: 1
3588374:Number of join permutations tried: 2
4458608:Number of join permutations tried: 581
4458832:Number of join permutations tried: 1

The key thing we see here is that there are five sections of long searches, and a few very small searches. Examination of the small search lists shows that they relate to some inline views which simply join a couple of tables. For each of the long searches we can see that the first join order in each set is for 14 “tables”. This is where the work is going. But if you add up the number of permutations in the long searches you get a total of 2,873, which is a long way off the 6,266 that we found with our grep for “^Join order[“ – so where do the extra join orders come from ? Let’s take a closer look at the file where we dumped all the Join order lines – the last 10 lines look like this:

4452004:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4452086:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453254:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453382:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454573:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454655:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455823:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455905:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4457051:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

Every single join order seems to have appeared twice, and doubling the counts we got for the sum of the permutations gets us close to the total we got for the join order search. Again, we could zoom in a little closer, does the text near the start of the two occurrences of join order 581 give us any clues ? We see the following just before the second one:

****** Recost for ORDER BY (using join row order) *******

The optimizer has tried to find a way of eliminating some of the cost by letting the table join order affect the order of the final output. Let’s do another grep to see how many join orders have been recosted:

[jonathan@linux01 big_trace]$ grep "Recost for ORDER BY" orcl_ora_25306.trc | sort | uniq -c
    452 ****** Recost for ORDER BY (using index) ************
   2896 ****** Recost for ORDER BY (using join row order) *******

So we’ve done a huge amount of recosting. Let’s check arithmetic: 452 + 2,896 + 2,873 = 6,221, which is remarkably close to the 6,266 we needed (and we have ignored a few dozen join orders that were needed for the inline views, and the final error is too small for me to worry about).
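Had I thought of it at the time, a quick cross-check of the duplication hypothesis – again a sketch I can no longer run against the original trace – would have been to count the repetitions directly:

$ grep "^Join order\[" orcl_ora_25306.trc | sort | uniq -c | sort -n | tail

Most lines should show up with a count of 2, with the occasional 3 for a join order that was recosted both “using index” and “using join row order”.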

We can conclude, therefore, that we did a huge amount of work costing a 14 table join a little over 6,000 times. It’s possible, of course, that we discarded lots of join orders very early on in the cost stage, so we could count the number of times we see a “Now joining” message – to complete a single pass on a 14 table join the optimizer will have to report “Now joining” 13 times.

[jonathan@linux01 big_trace]$ grep -n "Now joining" orcl_ora_25306.trc | wc -l
43989

Since the message appeared 44,000 times from 6,200 join orders we have an average of 7 steps evaluated per join order. Because of the way that the optimizer takes short-cuts I think this is a fairly strong clue that most of the join order calculations actually completed, or got very close to completing, over the whole 14 tables. (The optimizer remembers “partial results” from previous join order calculations, so doesn’t have to do 13 “Now joining” steps on every single join order.)

We still need to know why the optimizer tried so hard before supplying a plan – so let’s look for the “Best so far” lines, which the trace file reports each time the optimizer finds a better plan than the previous best. Here’s an example of what we’re looking for:

       Cost: 206984.61  Degree: 1  Resp: 206984.61  Card: 0.00 Bytes: 632
***********************
Best so far:  Table#: 0  cost: 56.9744  card: 1.0000  bytes: 30
              Table#: 3  cost: 59.9853  card: 0.0000  bytes: 83
              Table#: 6  cost: 60.9869  card: 0.0000  bytes: 151
              Table#:10  cost: 61.9909  card: 0.0000  bytes: 185
              Table#: 5  cost: 62.9928  card: 0.0000  bytes: 253
              Table#: 2  cost: 65.0004  card: 0.0000  bytes: 306
              Table#: 1  cost: 122.4741  card: 0.0000  bytes: 336
              Table#: 8  cost: 123.4760  card: 0.0000  bytes: 387
              Table#: 4  cost: 125.4836  card: 0.0000  bytes: 440
              Table#: 7  cost: 343.2625  card: 0.0000  bytes: 470
              Table#: 9  cost: 345.2659  card: 0.0000  bytes: 530
              Table#:11  cost: 206981.5979  card: 0.0000  bytes: 564
              Table#:12  cost: 206982.6017  card: 0.0000  bytes: 598
              Table#:13  cost: 206984.6055  card: 0.0000  bytes: 632
***********************

As you can see, we get a list of the tables (identified by their position in the first join order examined) with details of accumulated cost. But just above this tabular display there’s a repeat of the cost that we end up with. So let’s write, and apply, a little awk script to find all the “Best so far” lines and then print the line two above. Here’s a suitable script, followed by a call to use it:

{
        if (index($0,"Best so far") != 0) {print NR m2}   # on a "Best so far" line, print its line number and the line seen two lines earlier
        m2 = m1; m1 = $0;                                 # remember the previous line (m1) and the line before that (m2)
}

awk -f cost.awk orcl_ora_25306.trc >temp.txt

There was a bit of a mess in the output – there are a couple of special cases (relating, in our trace file, to the inline views and the appearance of a “group by placement”) that cause irregular patterns to appear, but the script was effective for the critical 14 table join. And looking through the list of costs for the various permutations we find that almost all the options show a cost of about 206,000 – except for the last few in two of the five “permutation sets” that suddenly drop to costs of around 1,500 and 1,300. The very high starting cost explains why the optimizer was prepared to spend so much time trying to find a good path and why it kept working so hard until the cost dropped very sharply.
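If I were doing it again I’d probably skip the intermediate script file and pull out just the cost figures in one call – a sketch along the same lines, assuming the “Cost:” line really does appear two lines above every “Best so far”:

$ awk '/Best so far/ {split(m2, f, " "); print NR, f[2]} {m2 = m1; m1 = $0}' orcl_ora_25306.trc

The second column is the figure from the “Cost:” line, so scanning down it makes the sudden drop from roughly 206,000 to around 1,500 easy to spot.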

Side bar: I have an old note from OCIS (the precursor, or the precursor of the precursor, of MOS) that the optimizer will stop searching when (the number of join orders tested) * (the number of “non-single-row” tables, according to the single table access path) * 0.3 is greater than the best cost so far. I even have a test script (run against 8.1.7.4, dated September 2002) that seems to demonstrate the formula. The formula may be terribly out of date by now and the rules of exactly how and when it applies may have changed – the model didn’t seem to work when I ran it against 19.3 – but the principle probably still holds true: with the best cost stuck around 206,000 and about 14 relevant tables, that limit would allow thousands of join orders before the search stopped, which is consistent with the effort we saw here.

At this point we might decide that we ought to look at the initial join order and at the join order where the cost dropped dramatically, and try to work out why Oracle picked such a bad starting join order and what it was about the better join order that the optimizer had missed. This might let us recognise some error in the statistics behind either the “bad” starting order or the “good” one and solve the problem by (e.g.) creating a column group or gathering some specific statistics. We might simply decide that we’ll take a good join order and pass it to the optimizer through a /*+ leading() */ hint, or take the entire outline and attach it to the query through a faked SQL Profile (or embedded set of hints).

However, for the purposes of this exercise (and because sometimes you have to find a strategic solution rather than a “single statement” solution) I’m going to carry on working through mechanisms for dissecting the trace file without looking too closely at any of the fine detail.

The final “high-level” target I picked was to pin down why there were 5 sets of join orders. I had noticed something particular about the execution plan supplied – it showed several occurrences of the operation “VIEW PUSHED PREDICATE” – so I wondered if this might be relevant, and did a quick check near the start of the main body of the trace file for anything that might be a clue. I found the following just after the “QUERY BLOCK SIGNATURE”.

QUERY BLOCK SIGNATURE
---------------------
  signature(): NULL
***********************************
Cost-Based Join Predicate Push-down
***********************************
JPPD: Checking validity of push-down in query block SEL$6E5D879B (#4)
JPPD:   Checking validity of push-down from query block SEL$6E5D879B (#4) to query block SEL$C20BB4FE (#6)
Check Basic Validity for Non-Union View for query block SEL$C20BB4FE (#6)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$799AD133 (#3)
JPPD:   Checking validity of push-down from query block SEL$799AD133 (#3) to query block SEL$EFE55ECA (#7)
Check Basic Validity for Non-Union View for query block SEL$EFE55ECA (#7)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$C2AA4F6A (#2)
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$799AD133 (#3)
Check Basic Validity for Non-Union View for query block SEL$799AD133 (#3)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$6E5D879B (#4)
Check Basic Validity for Non-Union View for query block SEL$6E5D879B (#4)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$FC56C448 (#5)
Check Basic Validity for Non-Union View for query block SEL$FC56C448 (#5)
JPPD:     Passed validity checks
JPPD: JPPD:   Pushdown from query block SEL$C2AA4F6A (#2) passed validity checks.
Join-Predicate push-down on query block SEL$C2AA4F6A (#2)
JPPD: Using search type: linear
JPPD: Considering join predicate push-down
JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)

As you can see we are doing cost-based join-predicate pushdown, and there are three targets which are valid for the operation. Notice the line that says “using search type: linear”, and the suggestive “starting iteration 1” – let’s look for more lines with “Starting iteration”:

[jonathan@linux01 big_trace]$ grep -n "Starting iteration" orcl_ora_25306.trc
9934:GBP: Starting iteration 1, state space = (20,21) : (0,0)
11529:GBP: Starting iteration 2, state space = (20,21) : (0,C)
11562:GBP: Starting iteration 3, state space = (20,21) : (F,0)
12479:GBP: Starting iteration 4, state space = (20,21) : (F,C)
12517:GBP: Starting iteration 1, state space = (18,19) : (0,0)
13860:GBP: Starting iteration 2, state space = (18,19) : (0,C)
13893:GBP: Starting iteration 3, state space = (18,19) : (F,0)
14587:GBP: Starting iteration 4, state space = (18,19) : (F,C)
14628:GBP: Starting iteration 1, state space = (16,17) : (0,0)
15972:GBP: Starting iteration 2, state space = (16,17) : (0,C)
16005:GBP: Starting iteration 3, state space = (16,17) : (F,0)
16700:GBP: Starting iteration 4, state space = (16,17) : (F,C)
16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)

There are four iterations for state space (3,4,5) – and look at the huge gaps between their “Starting iteration” lines. (The second triple looks like a set of flags showing, for each of the three candidate query blocks, whether the join predicate is currently pushed into it – so the linear search simply switches the pushdowns on one at a time.) In fact, let’s go a little closer and combine their starting lines with the lines above where I searched for “Number of join permutations tried:”


16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622799:Number of join permutations tried: 374

622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759817:Number of join permutations tried: 673

1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750475:Number of join permutations tried: 674

2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)
3586276:Number of join permutations tried: 571

4458608:Number of join permutations tried: 581
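
Since the grep -n output carries trace file line numbers, a quick way to quantify those gaps (a retrospective sketch, not something from the original investigation) is to difference the line numbers and sort:

$ grep -n "Starting iteration\|Number of join permutations" orcl_ora_25306.trc |
        awk -F: 'NR > 1 {print $1 - prev, line} {prev = $1; line = $0}' |
        sort -rn | head -5

Each row of output shows how many trace lines followed the quoted marker before the next marker appeared, so the five expensive searches should fall straight out of the sort.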

At this point my notes end and I don’t know where I was going with the investigation. I know that I suggested to the OP that the cost-based join predicate pushdown was having a huge impact on the optimization time and suggested he experiment with disabling the feature. (Parse time dropped dramatically, but query run-time went through the roof – so that proved a point, but wasn’t a useful strategy). I don’t know, however, what the fifth long series of permutations was for, so if I could find the trace file one of the things I’d do next would be to look at the detail a few lines before line 4,458,608 to see what triggered that part of the re-optimization. I’d also want to know whether the final execution plan came from the fifth series and could be reached without involving all the join predicate pushdown work, or whether it was a plan that was only going to appear after the optimizer had worked through all 4 iterations.

The final plan did involve all 3 pushed predicates (which looks like it might have been from iteration 4), so it might have been possible to find a generic strategy for forcing unconditional predicate pushing without doing all the expensive intermediate work.

Version 12c and beyond

That was then, and this is now. And something completely different might have appeared in 12c (or 19c) – but the one thing that is particularly helpful is that you can bet that every iteration of the JPPD state spaces would have produced a “TIMER:” line in the trace file, making it very easy to run grep -n “TIMER:” (or -nT as I recently discovered) against the trace file to pinpoint the issue very quickly.

Here’s an example from my “killer_parse.sql” query after setting "_fix_control"='16923858:4' (1e4 microseconds = 1/100th of a second) in an instance of 19c:


$ grep -nT TIMER or19_ora_21051.trc

16426  :TIMER:      bitmap access paths cpu: 0.104006 sec elapsed: 0.105076 sec
252758 :TIMER:     costing general plans cpu: 0.040666 sec elapsed: 0.040471 sec
309460 :TIMER:      bitmap access paths cpu: 0.079509 sec elapsed: 0.079074 sec
312584 :TIMER: CBQT OR expansion SEL$765CDFAA cpu: 10.474142 sec elapsed: 10.508788 sec
313974 :TIMER: Complex View Merging SEL$765CDFAA cpu: 1.475173 sec elapsed: 1.475418 sec
315716 :TIMER: Table Expansion SEL$765CDFAA cpu: 0.046262 sec elapsed: 0.046647 sec
316036 :TIMER: Star Transformation SEL$765CDFAA cpu: 0.029077 sec elapsed: 0.026912 sec
318207 :TIMER: Type Checking after CBQT SEL$765CDFAA cpu: 0.220506 sec elapsed: 0.219273 sec
318208 :TIMER: Cost-Based Transformations (Overall) SEL$765CDFAA cpu: 13.632516 sec elapsed: 13.666360 sec
328948 :TIMER:      bitmap access paths cpu: 0.093973 sec elapsed: 0.095008 sec
632935 :TIMER: Access Path Analysis (Final) SEL$765CDFAA cpu: 7.703016 sec elapsed: 7.755957 sec
633092 :TIMER: SQL Optimization (Overall) SEL$765CDFAA cpu: 21.539010 sec elapsed: 21.632012 sec

The closing 21.63 seconds (line 633,092) is largely the 7.756 seconds of final Access Path Analysis (line 632,935) plus the 13.666 seconds of Cost-Based Transformation time (line 318,208); and that 13.666 seconds is mostly the 1.475 seconds of Complex View Merging (line 313,974) plus the 10.508 seconds of CBQT OR expansion (line 312,584) – so let’s try disabling OR expansion (alter session set "_no_or_expansion"=true;) and try again:


$ grep -nT TIMER or19_ora_22205.trc
14884  :TIMER:      bitmap access paths cpu: 0.062453 sec elapsed: 0.064501 sec
15228  :TIMER: Access Path Analysis (Final) SEL$1 cpu: 0.256751 sec elapsed: 0.262467 sec
15234  :TIMER: SQL Optimization (Overall) SEL$1 cpu: 0.264099 sec elapsed: 0.268183 sec

Not only was optimisation faster, the runtime was quicker too.

Warning – it’s not always that easy.

 

Library Cache Stats

Jonathan Lewis - Sun, 2019-11-17 03:36

In response to a comment that one of my notes references a call to a package “snap_libcache”, I’ve posted this version of the SQL that can be run by SYS to create the package, with a public synonym, and privileges granted to public to execute it. The package doesn’t report the DLM (RAC) related activity, and is suitable only for 11g onwards (older versions require a massive decode of an index value to convert indx numbers into names).

rem
rem     Script:         snap_11_libcache.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2001 (updated for 11g)
rem     Purpose:        Package to get snapshot start and delta of library cache stats
rem
rem     Notes
rem             Lots of changes needed by 11.2.x.x where x$kglst holds
rem             two types - TYPE (107) and NAMESPACE (84) - but no
rem             longer needs a complex decode.
rem
rem             Has to be run by SYS to create the package
rem
rem     Usage:
rem             set serveroutput on size 1000000 format wrapped
rem             set linesize 144
rem             set trimspool on
rem             execute snap_libcache.start_snap
rem             -- do something
rem             execute snap_libcache.end_snap
rem

create or replace package snap_libcache as
        procedure start_snap;
        procedure end_snap;
end;
/

create or replace package body snap_libcache as

cursor c1 is
        select
                indx,
                kglsttyp        lib_type,
                kglstdsc        name,
                kglstget        gets,
                kglstght        get_hits,
                kglstpin        pins,
                kglstpht        pin_hits,
                kglstrld        reloads,
                kglstinv        invalidations,
                kglstlrq        dlm_lock_requests,
                kglstprq        dlm_pin_requests,
--              kglstprl        dlm_pin_releases,
--              kglstirq        dlm_invalidation_requests,
                kglstmiv        dlm_invalidations
        from x$kglst
        ;

type w_type1 is table of c1%rowtype index by binary_integer;
w_list1         w_type1;
w_empty_list    w_type1;

m_start_time    date;
m_start_flag    char(1);
m_end_time      date;

procedure start_snap is
begin

        m_start_time := sysdate;
        m_start_flag := 'U';
        w_list1 := w_empty_list;

        for r in c1 loop
                w_list1(r.indx).gets := r.gets;
                w_list1(r.indx).get_hits := r.get_hits;
                w_list1(r.indx).pins := r.pins;
                w_list1(r.indx).pin_hits := r.pin_hits;
                w_list1(r.indx).reloads := r.reloads;
                w_list1(r.indx).invalidations := r.invalidations;
        end loop;

end start_snap;

procedure end_snap is
begin

        m_end_time := sysdate;

        dbms_output.put_line('---------------------------------');
        dbms_output.put_line('Library Cache - ' ||
                to_char(m_end_time,'dd-Mon hh24:mi:ss')
        );

        if m_start_flag = 'U' then
                dbms_output.put_line('Interval:- ' ||
                        trunc(86400 * (m_end_time - m_start_time)) ||
                        ' seconds'
                );
        else
                dbms_output.put_line('Since Startup:- ' ||
                        to_char(m_start_time,'dd-Mon hh24:mi:ss')
                );
        end if;

        dbms_output.put_line('---------------------------------');

        dbms_output.put_line(
                rpad('Type',10) ||
                rpad('Description',41) ||
                lpad('Gets',12) ||
                lpad('Hits',12) ||
                lpad('Ratio',6) ||
                lpad('Pins',12) ||
                lpad('Hits',12) ||
                lpad('Ratio',6) ||
                lpad('Invalidations',14) ||
                lpad('Reloads',10)
        );

        dbms_output.put_line(
                rpad('-----',10) ||
                rpad('-----',41) ||
                lpad('----',12) ||
                lpad('----',12) ||
                lpad('-----',6) ||
                lpad('----',12) ||
                lpad('----',12) ||
                lpad('-----',6) ||
                lpad('-------------',14) ||
                lpad('------',10)
        );

        for r in c1 loop
                if (not w_list1.exists(r.indx)) then
                        w_list1(r.indx).gets := 0;
                        w_list1(r.indx).get_hits := 0;
                        w_list1(r.indx).pins := 0;
                        w_list1(r.indx).pin_hits := 0;
                        w_list1(r.indx).invalidations := 0;
                        w_list1(r.indx).reloads := 0;
                end if;

                if (
                           (w_list1(r.indx).gets != r.gets)
                        or (w_list1(r.indx).get_hits != r.get_hits)
                        or (w_list1(r.indx).pins != r.pins)
                        or (w_list1(r.indx).pin_hits != r.pin_hits)
                        or (w_list1(r.indx).invalidations != r.invalidations)
                        or (w_list1(r.indx).reloads != r.reloads)
                ) then

                        dbms_output.put(rpad(substr(r.lib_type,1,10),10));
                        dbms_output.put(rpad(substr(r.name,1,41),41));
                        dbms_output.put(to_char(
                                r.gets - w_list1(r.indx).gets,
                                '999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.get_hits - w_list1(r.indx).get_hits,
                                '999,999,990'));
                        dbms_output.put(to_char(
                                (r.get_hits - w_list1(r.indx).get_hits)/
                                greatest(
                                        r.gets - w_list1(r.indx).gets,
                                        1
                                ),
                                '999.0'));
                        dbms_output.put(to_char(
                                r.pins - w_list1(r.indx).pins,
                                '999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.pin_hits - w_list1(r.indx).pin_hits,
                                '999,999,990'));
                        dbms_output.put(to_char(
                                (r.pin_hits - w_list1(r.indx).pin_hits)/
                                greatest(
                                        r.pins - w_list1(r.indx).pins,
                                        1
                                ),
                                '999.0'));
                        dbms_output.put(to_char(
                                r.invalidations - w_list1(r.indx).invalidations,
                                '9,999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.reloads - w_list1(r.indx).reloads,
                                '9,999,990')
                        );
                        dbms_output.new_line;
                end if;

        end loop;

end end_snap;

begin
        select
                startup_time, 'S'
        into
                m_start_time, m_start_flag
        from
                v$instance;

end snap_libcache;
/

drop public synonym snap_libcache;
create public synonym snap_libcache for snap_libcache;
grant execute on snap_libcache to public;
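
A minimal sketch of a complete run from the shell, following the usage notes in the script header (the SYSDBA connection is only a convenience for testing – once the synonym and grant are in place any account can execute the package):

$ sqlplus -s / as sysdba <<'EOF'
set serveroutput on size 1000000 format wrapped
set linesize 144
execute snap_libcache.start_snap
-- the workload you want to measure goes here
execute snap_libcache.end_snap
EOF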

You’ll note that there are two classes of data, “namespace” and “type”. The dynamic view v$librarycache reports only the namespace rows.

Iconic South African Retailer Boosts Agility with Oracle

Oracle Press Releases - Thu, 2019-11-14 08:00
Press Release
Iconic South African Retailer Boosts Agility with Oracle
Retail powerhouse Cape Union Mart International goes to the cloud to accelerate growth

REDWOOD SHORES, Calif. and CAPE TOWN, South Africa—Nov 14, 2019

Outdoor and fashion retailer and manufacturer Cape Union Mart International Pty Ltd has selected Oracle to modernize its retail operations. With the Oracle Retail Cloud, the company plans to fuel growth across all sales channels with better inventory visibility and more sophisticated merchandise assortments that keep shoppers coming back for more.

“This is a complex project, touching virtually every part of our business. The Oracle team has partnered with us from start to finish; building our trust and giving us an insight into what we can expect in the implementation of the transformational project – we look forward to working with them and rebuilding our retail IT landscape into a world class environment, taking Cape Union Mart to the next level,” said Grant De Waal-Dubla, Group IT Executive, Cape Union Mart.

Cape Union Mart strives to deliver what its customers need, with the right product in the right store at the right time. Until now, the brand has managed its retail assortments with a talented team and a well-defined process in Excel spreadsheets. As Cape Union Mart continued to grow, it needed a better way to manage its operations. With Oracle Retail, the brand can fully embrace automated, systemized workflows driven by dashboards and end-to-end reporting with a common user interface. This will lead to more seamless fulfillment and more accurate demand forecasts.

“By choosing Oracle, Cape Union Mart can focus on business objectives and results, not technology. As a cloud provider, we take great pride in building appropriate real-time integration across the Oracle portfolio so our customers can get the information and results they need quickly – whether that’s moving existing inventory or anticipating next season’s fashion trends and ensuring they are available for customers,” said Mike Webster, senior vice president and general manager, Oracle Retail.

Cape Union Mart International Pty Ltd will implement several solutions in the Oracle Retail modern platform including Oracle Retail Merchandising Cloud Service, Oracle Retail Allocation Cloud Service, Oracle Retail Pricing Cloud Services, Oracle Retail Invoice Matching Cloud Service, Oracle Retail Integration Cloud Services, Oracle Retail Merchandise Financial Planning Cloud Services, Oracle Retail Assortment and Item Planning Cloud Service, Oracle Retail Science Platform Cloud Services, Oracle Retail Demand Forecasting Cloud Service, Oracle Retail Store Inventory Operations Cloud Service, Oracle Middleware Cloud Services, Oracle Warehouse Management Cloud and Oracle Financials Cloud. Cape Union Mart has partnered with Oracle Retail Consulting for the implementation.

Contact Info
Kaitlin Ambrogio
Oracle PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Oracle Cloud Applications Achieves Department of Defense Impact Level 4 Provisional Authorization

Oracle Press Releases - Thu, 2019-11-14 07:00
Press Release
Oracle Cloud Applications Achieves Department of Defense Impact Level 4 Provisional Authorization

Redwood Shores, Calif.—Nov 14, 2019

Oracle today announced that Oracle Cloud Applications has achieved Impact Level 4 (IL4) Provisional Authorization from the Defense Information Systems Agency (DISA) and the U.S. Department of Defense (DoD). With IL4, Oracle can now offer its software-as-a-service (SaaS) cloud suite to additional government agencies within the DoD community. Since the authorization was granted, the DoD has selected Oracle Human Capital Management (HCM) Cloud to help transform its HR operations in support of 900,000 civilian employees.

All organizations need comprehensive and adaptable technology to stay ahead of changing business and technology demands. For federal government agencies in particular, it’s even more critical to have a reliable, highly secure solution to navigate time-sensitive workflows and make strategic mission decisions. To meet these demands, Oracle Cloud Applications enables customers to benefit from best-in-class functionality, robust security, high-end scalability, mission-critical performance, and strong integration capabilities.

“At Oracle, our focus is centered on our customers’ needs. U.S. Federal and Department of Defense customers need best-in-class, agile, and secure software to run their operations – and we can deliver that,” said Mark Johnson, SVP, Oracle Public Sector. “With built-in support for Impact Level 4, the DoD community can now take advantage of Oracle Cloud Applications to break down silos, quickly and easily embrace the latest innovations, and improve user engagement, collaboration, and performance.”

“The Department of Defense awarded a contract to Oracle HCM Cloud to support its enterprise human resource portfolio. The award modernizes its existing civilian personnel business process functions to enable improved streamlined approaches in support of the workforce. The DoD's Defense Manpower Data Center is leading the implementation of the HCM Cloud, which replaces numerous legacy systems and is targeted for full deployment in mid 2020,” according to the DMDC Director, Michael Sorrento.

Oracle has been a long-standing strategic technology partner of the US government, including the Central Intelligence Agency (CIA), the first customer to use Oracle’s flagship database software 35 years ago. Today, more than 500 government organizations take advantage of Oracle’s industry-leading technologies and superior performance.

Contact Info
Celina Bertallee
Oracle
559-283-2425
celina.bertallee@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Celina Bertallee

  • 559-283-2425

Excel Average Function – A Step By Step Tutorial

VitalSoftTech - Wed, 2019-11-13 10:33

Calculating the average when you only have a few entries in your data is one thing, but having to do the same with hundreds of data entries is another story entirely. Even using a calculator to find the average of this many numbers can be highly time-consuming and, to be honest, quite frustrating. After all, […]

The post Excel Average Function – A Step By Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

The World Bee Project Works to Sustain Buzz with Oracle Cloud and AI

Oracle Press Releases - Wed, 2019-11-13 08:00
Blog
The World Bee Project Works to Sustain Buzz with Oracle Cloud and AI

By Guest Author, Oracle—Nov 13, 2019

The declining bee population is not just a problem for honey lovers; it’s a threat to the global food supply.

Oracle announced a partnership with The World Bee Project CIC in 2018, offering the use of its cloud storage and AI analytics tools to support the organization’s goals and innovations such as its BeeMark honey certification.

The World Bee Project is the first private organization to launch a global honeybee monitoring initiative. By establishing a globally coordinated monitoring program for honeybees – and eventually for other key pollinator groups – it aims to inform and implement actions that improve pollinator habitats, create more sustainable ecosystems, and improve food security, nutrition, and livelihoods.

The World Bee Project Hive Network remotely collects data from varying environments through interconnected hives equipped with commercially available IoT sensors. The sensors combine colony-acoustics monitoring with other parameters such as brood temperature, humidity, hive weight, and apiary weather conditions. They also monitor and interpret the sound of a bee colony to assess colony behavior, strength, and health.

The World Bee Project Hive Network’s multiple local data sources provide a far richer view than any single source could, enabling global-scale computation that generates new insights into declining pollinator populations.

After the data has been validated in The World Bee Project database, it can be fed into Oracle Cloud, which uses analytics tools including AI and data visualization to give The World Bee Project new insights into the relationship between bees and their varying environments. These insights can then be shared with smallholder farmers, scientists, researchers, governments, and other stakeholders.

“The partnership with Oracle will absolutely transform the scene as we can link AI with pollination and agricultural biodiversity,” said Sabiha Malik, founder and executive president of The World Bee Project CIC. “We have the potential to help transform the way the world grows food and to protect the livelihoods of hundreds of millions of smallholder farmers, but we depend entirely on stakeholders such as banks, agritech, insurance companies, and governments to sponsor and invest in our work so that we can begin to step toward fulfilling our mission.”

Oracle will be offering cloud computing technology and analytics tools to The World Bee Project to enable it to process data in collaboration with its science partner, the University of Reading, to enable science-based evidence to emerge.

Oracle is currently looking at funding models to support the expansion of The World Bee Project Hive Network to ensure a truly global view of the health of bee populations.

Watch The World Bee Project Video to Learn More

Read More Stories from Oracle Cloud

The World Bee Project is one of the thousands of innovative customers succeeding in the cloud. Read about others in Stories from Oracle Cloud: Business Successes

nVision Bug in PeopleTools 8.55/8.56 Impacts Performance

David Kurtz - Tue, 2019-11-12 13:12
I have recently come across an interesting bug in nVision that has a significant performance impact on nVision reports in particular and can impact the database as a whole.
Problem nVision SQL
This is an example of the problematic SQL generated by nVision.  The problem is that all of the SQL looks like this: there is never any group by clause, nor any grouping columns in the select clause in front of the SUM().
SELECT SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L2, PSTREESELECT10 L1
WHERE A.LEDGER='ACTUAL' AND A.FISCAL_YEAR=2018 AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 8
AND L2.SELECTOR_NUM=159077 AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.BUSINESS_UNIT='10000')
AND L1.SELECTOR_NUM=159075 AND A.DEPTID=L1.RANGE_FROM_10
AND A.CURRENCY_CD='GBP' AND A.STATISTICS_CODE=' '
Each query returns only a single row that populates a single cell in the report, so a different SQL statement is generated and executed for every cell in the report.  Therefore, more statements are parsed and executed, and more scans of the ledger indexes and look-ups of the ledger table are performed.  This consumes more CPU and more logical I/O.
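If you suspect an environment is affected, the shared pool gives a quick indication.  This is a hedged sketch, not from the original analysis: it classifies ledger SUM() statements by whether they contain a GROUP BY, and it assumes the PeopleSoft owner is SYSADM – adjust the schema name and text patterns to suit your system.
SELECT CASE WHEN UPPER(s.sql_fulltext) LIKE '%GROUP BY%'
            THEN 'grouped (normal)' ELSE 'ungrouped (single cell)' END AS query_type
,      COUNT(*)          AS num_statements
,      SUM(s.executions) AS total_executions
FROM   v$sql s
WHERE  s.parsing_schema_name = 'SYSADM'
AND    UPPER(s.sql_fulltext) LIKE 'SELECT%SUM(%'
AND    UPPER(s.sql_fulltext) LIKE '%PS_LEDGER%'
GROUP BY CASE WHEN UPPER(s.sql_fulltext) LIKE '%GROUP BY%'
            THEN 'grouped (normal)' ELSE 'ungrouped (single cell)' END;
A large number of ungrouped statements, each executed only once, is the signature of the behaviour described above.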
Normal nVision SQL
This is how I would expect normal nVision SQL to look.  This example, although obfuscated, came from a real customer system.  Note how the query is grouped by TREE_NODE_NUM from two of the tree selector tables, so this one query now populates a block of cells.
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_UKMGT'
AND A.FISCAL_YEAR=2018
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
OR A.DEPTID='B9150' OR A.DEPTID=' ')
AND L2.SELECTOR_NUM=10228
AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
AND L3.SELECTOR_NUM=10231
AND A.ACCOUNT=L3.RANGE_FROM_10
AND A.CHARTFIELD1='0012345'
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
The Bug
This Oracle note details an nVision bug:
"UPTO SET2A-C Fixes - Details-only nPlosion not happening for Single Chart-field nPlosion Criteria.
And also encountered a performance issue when enabled details-only nPlosion for most of the row criteria in the same layout
Issue was introduced on build 8.55.19.
Condition: When most of the row filter criteria enabled Details-only nPlosion. This is solved in 8.55.22 & 8.56.07.
UPTO SET3 Fixes - Performance issue due to the SET2A-C fixes has solved but encountered new one. Performance issue when first chart-field is same for most of the row criteria in the same layout.
Issue was introduced on builds 8.55.22 & 8.56.07.
Condition: When most of the filter criteria’s first chart-field is same. The issue is solved in 8.55.25 & 8.56.10."
In summary
  • Bug introduced in PeopleTools 8.55.19, fully resolved in 8.55.25.
  • Bug introduced in PeopleTools 8.56.07, fully resolved in 8.56.10.
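To check whether a particular environment falls inside these ranges, the PeopleTools release can be read directly from the database.  A minimal sketch – PSSTATUS.TOOLSREL is the usual place PeopleTools records its release, but verify this in your own environment:
SELECT toolsrel FROM psstatus;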

Basic Replication -- 11 : Indexes on a Materialized View

Hemant K Chitale - Tue, 2019-11-12 08:46
A Materialized View is actually also a physical Table (by the same name) that is created and maintained to store the rows that the MV query is supposed to present.

Since it is also a Table, you can build custom Indexes on it.

Here, my Source Table has an Index on OBJECT_ID :

SQL> create table source_table_1
2 as select object_id, owner, object_name
3 from dba_objects
4 where object_id is not null
5 /

Table created.

SQL> alter table source_table_1
2 add constraint source_table_1_pk
3 primary key (object_id)
4 /

Table altered.

SQL> create materialized view log on source_table_1;

Materialized view log created.

SQL>


I then build a Materialized View with an additional Index on it:

SQL> create materialized view mv_1
2 refresh fast on demand
3 as select object_id as obj_id, owner as obj_owner, object_name as obj_name
4 from source_table_1
5 /

Materialized view created.

SQL> create index mv_1_ndx_on_owner
2 on mv_1 (obj_owner)
3 /

Index created.

SQL>


Let's see if this Index is usable.

SQL> exec  dbms_stats.gather_table_stats('','MV_1');

PL/SQL procedure successfully completed.

SQL> explain plan for
2 select obj_owner, count(*)
3 from mv_1
4 where obj_owner like 'H%'
5 group by obj_owner
6 /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2523122927

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 10 | 15 (0)| 00:00:01 |
| 1 | SORT GROUP BY NOSORT| | 2 | 10 | 15 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | MV_1_NDX_ON_OWNER | 5943 | 29715 | 15 (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------

2 - access("OBJ_OWNER" LIKE 'H%')
filter("OBJ_OWNER" LIKE 'H%')



Note how this Materialized View has a column called "OBJ_OWNER" (while the Source Table column is called "OWNER") and the Index ("MV_1_NDX_ON_OWNER") on this column is used.


You would also have noted that you can run DBMS_STATS.GATHER_TABLE_STATS on a Materialized View and its Indexes.

However, it is NOT a good idea to define your own Unique Indexes (including a Primary Key) on a Materialized View.  During the course of a Refresh, the MV may not be consistent and the Unique constraint may be violated.  See Oracle Support Document # 67424.1
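Non-Unique Indexes, like MV_1_NDX_ON_OWNER above, are simply maintained through each Refresh.  A quick sketch against the objects built earlier (the OBJECT_ID value 9999999 is an assumption -- pick one not already present in SOURCE_TABLE_1):

insert into source_table_1 values (9999999, 'HEMANT', 'NEW_OBJECT');
commit;
exec dbms_mview.refresh('MV_1', method=>'F');

The 'F' method requests a Fast Refresh from the Materialized View Log, and the custom Index remains usable afterwards.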



Categories: DBA Blogs

Oracle Introduces Cloud Native Modern Monetization

Oracle Press Releases - Tue, 2019-11-12 07:00
Press Release
Oracle Introduces Cloud Native Modern Monetization Cloud native deployment option gives market leaders the agility to embrace 5G, IoT and future digital business models

Redwood Shores, Calif.—Nov 12, 2019

Digital service providers are transforming their monetization systems to prepare for the upcoming demands of 5G and future digital services. Oracle Communications’ new cloud native deployment option for Billing and Revenue Management (BRM) addresses these demands by combining the features and extensibility of a proven, convergent charging system with the efficiency of cloud and DevOps agility.

Oracle Communications’ cloud native BRM deployment option provides a modern monetization solution to capitalize on the opportunities presented by today’s mobile, fixed and cable digital services. It supports any service, industry or partner-enabled business model and provides a foundation for 5G network slicing and edge monetization.

“As the telecommunications industry prepares itself to take advantage of 5G, architectural agility will be essential to monetize next-generation services quickly and efficiently,” said John Abraham, principal analyst, Analysys Mason. “With its cloud native compliant, microservices-based architecture framework, the latest version of Oracle’s Billing and Revenue Management solution is well positioned to accelerate CSPs’ ability to support emerging 5G-enabled use cases.”

Cloud native BRM enables internal IT teams to incorporate DevOps practices to more quickly design, test and deploy new services. Organizations can optimize their operations by seamlessly managing business growth with efficient scaling and simplified updates, and by taking advantage of deployment in any public or private cloud infrastructure environment. BRM further increases IT agility when deployed on Oracle’s next generation Cloud Infrastructure, which features autonomous capabilities, adaptive intelligence and machine learning cyber security.

“Service providers and enterprises are looking for agile solutions to quickly monetize 5G and IoT services,” said Jason Rutherford, senior vice president and general manager, Oracle Communications. “Cloud native BRM deployed on Oracle Cloud Infrastructure allows our customers to operate more efficiently, react quickly to competition and to pioneer new price plans and business models that capitalize on the digital revolution.”

Find out more about Oracle Communications Billing and Revenue Management, with modern monetization capabilities for 5G and the connected digital world. 

To learn more about Oracle Communications industry solutions, visit: Oracle Communications, LinkedIn, or join the conversation at Twitter @OracleComms.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world – from network evolution to digital business to customer experience. www.oracle.com/communications

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Spain’s New York Burger Delivers Sizzling Service with Oracle

Oracle Press Releases - Tue, 2019-11-12 07:00
Press Release
Spain’s New York Burger Delivers Sizzling Service with Oracle Restaurant sees 50 percent decrease in customer wait times with Oracle MICROS

Redwood Shores, Calif.—Nov 12, 2019

New York Burger set out to shake up the local food scene by bringing American-style burgers and barbeque dishes to Madrid. Today, the fast-growing chain is doing exactly that. To keep up with the pace of expansion while keeping customers happy, New York Burger has added the Oracle MICROS Simphony Point of Sale (POS) system to its technology menu to seamlessly connect servers and the kitchen. With real-time order sharing, cooks can start an order immediately, reducing the time it takes for orders to reach hungry diners. Since deploying the Oracle Cloud solution, the chain has realized a 50 percent decrease in customer wait times across its five restaurants.

“As the business grew, we found our existing solution was not up to the challenge, and inefficiencies meant our customers were kept waiting,” said Pablo Colmenares, founder, New York Burger. “Oracle has definitely helped us to streamline our operations. It is simple and fast to use, and utilizing the product helped us become a smarter business. Oracle has a great global reputation, there’s a reason why the biggest brands in the world trust Oracle. Every strong tree needs strong roots and Oracle is our roots.”

Along with improving service efficiency, the Oracle MICROS Simphony POS system has helped New York Burger streamline menu management, giving it immediate data and reporting on its customers’ favorite menu items. These insights have been especially helpful as the restaurant chain has revamped its menu to better match customers’ preferences, removing items that were not popular and reducing food waste.

New York Burger has also relied on Oracle’s solutions to further its green approach to operating its restaurants, enabling it to reduce waste and move closer to its goal of being an environmentally friendly restaurant. Specifically, Oracle’s solution helps management minimize excess costs by reducing unnecessary ingredient surplus.

“This innovative chain took a chance on bringing a new kind of cuisine to Madrid – to rave reviews. But today, the quality of the experience customers have at a restaurant must be in parallel with the quality of the food,” said Simon de Montfort Walker, senior vice president and general manager for Oracle Food and Beverage. “With Oracle, New York Burger is able to speed service and give servers more time with customers - delivering an unforgettable meal on both sides of the equation. And with better insights into tastes, trends and what’s selling well, New York Burger can reduce waste and conserve revenue while giving customers a menu that will keep them coming back again and again.”

Please view New York Burger’s video: New York Burger Delivers Joyful Food Sustainably with Oracle

Contact Info
Katie Barron
Oracle
+1-202-904-1138
katie.barron@oracle.com
Scott Porter
Oracle
+1-650-274-9519
scott.c.porter@oracle.com
About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1-202-904-1138

Scott Porter

  • +1-650-274-9519

Oracle Cloud's Competitive Advantage

Oracle Press Releases - Tue, 2019-11-12 06:00
Blog
Oracle Cloud's Competitive Advantage

By Steve Daheb, Senior Vice President, Oracle Cloud—Nov 12, 2019

I just got back from a press tour in New York, where the most common question I heard was: What's Oracle's competitive advantage in the cloud? I believe it's the completeness of our offering. Here's why.

The three main components of the cloud are the application, platform, and infrastructure layers. But most enterprises don't think about the cloud in terms of these silos. They take a single, holistic view of their problems and how to solve them.

Because we play in all layers of the cloud—and are continually adding integrations between the layers—we are in a unique position to help.

The application layer refers to software such as enterprise resource planning, human capital management, supply chain management, and customer engagement. These are core applications that enterprises rely on to run their businesses. Oracle is the established leader in this area, and we're continuing to innovate and differentiate by integrating artificial intelligence, blockchain, and other important new technologies into these applications.

These applications sit on the platform layer, which is powered by the Oracle Autonomous Database. We've taken our 40-plus years of expertise and combined it with advanced machine learning technologies to create the market's only self-driving and self-repairing database.

The platform layer is also where our analytics, security, and integration capabilities live. Analytics are helping businesses answer questions they couldn't answer before—and ask new questions they never would have thought of. And security, which used to be seen as an inhibitor to cloud adoption, is actually now a driver. Enterprises are saying, "Oracle's data center is going to be more secure than what we can manage on our own."

The application and platform layers rest upon Oracle's Generation 2 Cloud Infrastructure. Our compute, storage, and networking capabilities are purpose-built to run new types of workloads in a more secure and performant way than our competitors can offer. We plan to open 20 Oracle Cloud data centers by the end of next year, which works out to one new data center every 23 days. And we're hiring 2,000 new people to support this infrastructure business.

Another differentiator for Oracle is our commitment to openness and interoperability in the cloud. As an example, we have a very strategic relationship with Microsoft. Joint customers can migrate to the cloud, build net new applications and even do things like run Microsoft analytics on top of an Oracle Database. We've also announced a collaboration with VMware to help customers run vSphere workloads in Oracle Cloud and to support Oracle software running on VMware.

We live in a hybrid and multicloud world. Oracle's comprehensive cloud offering, combined with our interoperability and multicloud support, helps customers achieve outcomes they simply couldn't with other vendors.

Watch Steve Daheb discuss the Oracle Cloud advantage on Cheddar and Yahoo Finance.

GRDF Reaches Four Million Smart Meter Milestone with Oracle

Oracle Press Releases - Tue, 2019-11-12 05:00
Press Release
GRDF Reaches Four Million Smart Meter Milestone with Oracle Leading French gas distributor continues natural gas expansion with world’s largest smart meter roll-out, expected to reach 11 million households by 2023

EUROPEAN UTILITY WEEK, Paris—Nov 12, 2019

Leading French distribution system operator (DSO) GRDF has rolled out more than four million smart meters, powered by Oracle Meter Data Management (MDM). This milestone is part of GRDF’s larger smart meter initiative, which is on track to reach 11 million households by 2023. With this program, GRDF can further realize its vision of improving energy management and enhancing customer satisfaction. GRDF serves 90 percent of France’s gas market.

“The move to smart meters and the implementation of new digitized functionalities are critical to delivering a natural gas network that fosters the energy transition for our territories,” said Vincent Pertuis, GRDF Director for the Smart Gas Metering Program. “With Oracle MDM, GRDF will be able to use data to continue to reimagine how we serve customers, accelerate decarbonization, and increase the flexibility and reliability of our network.” 

Using Oracle Utilities Meter Data Management (MDM), GRDF is modernizing its natural gas distribution network to make it an effective tool for the energy transition. The result will be a fully digitized and connected network that delivers benefits to customers and the environment by integrating renewable gas, enhancing safety, providing data to better manage gas supply, and linking with other networks to enhance flexibility and storage capacity. 

The smart meter roll-out will provide GRDF with massive amounts of interval meter data that will be essential to running a more efficient, cleaner network. Oracle Utilities MDM helps energy providers not only capture the data but securely optimise its use and management to support core operations and fuel innovation.

“GRDF’s smart grid modernization project is the largest in the world, and hitting the four million smart meter mark represents tremendous progress,” said Francois Vazille, vice president of JAPAC & EMEA, Oracle Utilities. “Oracle MDM is a critical component of GRDF’s digital transformation journey and of opening up new opportunities for GRDF to serve its customers with clean, reliable energy.”

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1.925.787.6744

ā·pěks 10 Years Later

Joel Kallman - Tue, 2019-11-12 03:49


Exactly 10 years ago today, I wrote a succinct blog post with the intent of clarifying how to properly pronounce and abbreviate Oracle APEX.  I decided to use the phonetic spelling, ā'pěks, to avoid all ambiguity with the pronunciation.  Was I successful?

  • I still encounter many people who spell this Apex (and not the correct APEX)
  • I routinely hear people pronounce this as ah·pěks or ap·ěks (and not the correct ā'pěks)

Obviously, we still have a ways to go.  However, this hasn't been a complete loss.  With many thanks to the global APEX community, this simple phonetic spelling has made its way onto a wide range of community-created merchandise ...and more.  And did I say stickers?

What I especially love is that all of this was created by the Oracle APEX community.  Instead of Oracle providing merchandise and branding for Oracle APEX, the community embraced this and ran with it themselves.  This has been wonderfully organic and authentic, and completely community-driven.

Going forward, if you come across someone who misspells or mispronounces Oracle APEX, please feel free to direct them to this blog post.  It is:

Oracle APEX

and it's pronounced ā·pěks.

Joining two tables with time range

Tom Kyte - Mon, 2019-11-11 11:49
Dear AskTom-Team! I wonder whether it is possible to join two tables that have time ranges. E.g., a table 'firmname' holds the name of a firm with two columns from_year and to_year that define the years the name is valid. Table 'address' holds the a...
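A common pattern for this kind of question is an overlap join. A hedged sketch follows – the FROM_YEAR/TO_YEAR columns come from the question itself, while the FIRM_ID join key and the ADDRESS column are assumptions for illustration:

SELECT f.name
,      a.address
,      GREATEST(f.from_year, a.from_year) AS overlap_from
,      LEAST(f.to_year, a.to_year)        AS overlap_to
FROM   firmname f
JOIN   address a ON a.firm_id = f.firm_id
       -- two ranges overlap when each starts no later than the other ends
       AND a.from_year <= f.to_year
       AND f.from_year <= a.to_year;

Each returned row carries the intersection of the two validity periods.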
Categories: DBA Blogs

TDE Encryption of local Oracle databases. KEK hosted on cloud service?

Tom Kyte - Mon, 2019-11-11 11:49
Hi, We want to encrypt some on-premise Oracle databases. If possible, we would like to avoid to use a physical HSM or to contract with a third party HSM cloud provider. Is this possible to store the KEK's in GCP or Azure, and to interface our lo...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator