Craig Shallahamer

President & Founder, OraPub, Inc.; Oracle ACE Director

OraPub, Inc.

Biography

Craig is a long-time Oracle DBA who specializes in Oracle tuning and machine learning and who started the OraPub website in 1995. Craig is a performance researcher and blogger, a consultant, the author of two books, an enthusiastic conference speaker and a passionate teacher to thousands of Oracle professionals. He clearly pushes the teaching envelope with his performance-focused membership program, webinars and videos! Craig has received a number of technical, effectiveness and community-involvement awards. Craig is also an Oracle ACE Director.

Papers

How To Build A Performance Indicator Using Machine Learning (Connect 2019)

Stream: DBAs; Developers; Managers

Both a time-based (think AWR) and a sample-based (think ASH) performance analysis provide the Oracle performance specialist with an incredibly rich data set and structured analysis possibilities. A DBA/Developer proficient in both AWR and ASH analysis is highly valuable in today's market. However, complex performance situations are not always quickly diagnosable with a time-based AWR analysis or an ASH incident analysis, because they are sometimes a complex mix of highly fragmented time classifications and a changing workload mix. This makes diagnosis more time consuming and complex. And certainly, the diagnosis will not be completed the second the SLA breach occurs! A solution is to use an unsupervised classification machine learning (UML) model to immediately tag, for example, a snapshot interval as bad, ok or wonderful. A UML predictive model has the ability to combine a virtually unlimited number of performance-related characteristics into a small set of classes, such as green, yellow or red. In summary, this is done by tagging a few of the known problem time samples as "bad", training the UML model, then noting the classification assigned to the "bad" samples. We then integrate a new sample into the model and observe how the model classifies it. If the new sample's class is "bad", we have identified a potential performance situation. At that point, any number of actions can occur, such as an email, a text, flashing red lights, sirens, etc. When automated, this "new sample integration and alert" process can occur in less than a second, well before the phone rings or a ticket is submitted, enhancing IT business operational efficiency. Join me as I demonstrate how a performance-focused Oracle DBA/Developer can use machine learning to their advantage!
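To give a flavour of the approach, here is a minimal Python sketch of the tag, train, classify and alert loop, using made-up snapshot metrics and a simple k-means clustering model. The metric names, sample values and library choice are illustrative assumptions only, not the model built in the session:

    # Minimal sketch: cluster AWR-style snapshot intervals, tag the cluster that
    # contains known problem intervals as "bad", then classify a new interval.
    # All metric names and values below are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Each row is one snapshot interval: [db_time_per_sec, cpu_pct, io_wait_pct, exec_per_sec]
    history = np.array([
        [ 5.0, 60, 10, 900],   # normal
        [ 5.5, 62, 12, 950],   # normal
        [ 4.8, 58,  9, 880],   # normal
        [22.0, 30, 65, 400],   # known problem interval
        [25.0, 28, 70, 380],   # known problem interval
    ])

    scaler = StandardScaler()
    X = scaler.fit_transform(history)

    # Train the unsupervised model; two clusters are enough for this tiny example.
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # Tag the clusters: whichever cluster holds the known "bad" intervals is the bad class.
    bad_cluster = model.predict(scaler.transform(history[3:5]))[0]

    def classify_snapshot(snapshot):
        """Return 'bad' or 'ok' for a new snapshot interval."""
        cluster = model.predict(scaler.transform([snapshot]))[0]
        return "bad" if cluster == bad_cluster else "ok"

    # New sample integration: classify the latest interval and alert if it looks bad.
    latest = [21.5, 33, 60, 420]
    if classify_snapshot(latest) == "bad":
        print("ALERT: latest snapshot classified as bad")  # email/text/siren would go here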

Powerful: Practical Machine Learning In The Hands Of The DBA/Developer (Connect 2019)

Stream: DBAs; Developers; Managers

Powerful machine learning (ML) is now available to everyone. Over the past few years, a number of changes have occurred that make this possible. It is now our turn, as Oracle DBAs/Developers, to leverage ML, enabling us to do things never before possible. But it's not that simple. There is a lot to learn: everything from methodology to math to tools. And then there is the nagging question: as a DBA/Developer, is there a real application in my work? The good news is, I have been able to leverage aspects of ML for many years. But now is the time to push all that ML has to offer into our work, where it lets us do things never before possible and takes the DBA/Developer to a whole new level. So, in this presentation I'm going to introduce you to the world of ML from a DBA/Developer perspective. This includes understanding what ML is, why to use it and why now. The ah-ha moment will occur when you learn that there are many algorithms within the ML umbrella to choose from. I'll pick two of these algorithms and demonstrate how I used them to answer difficult DBA/Developer-type questions. One will be covered at a high level; for the other, I'll walk you step by step through the process. Now is the time for Oracle DBAs/Developers to step into the world of ML, because it is powerful, practical and enables us to do things we have never been able to do before.
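As an illustration of the "many algorithms under the ML umbrella" point, the minimal Python sketch below applies two different algorithm families, a supervised regression and an unsupervised clustering, to made-up performance data. The data, metric names and algorithm choices are purely illustrative assumptions and are not necessarily the two algorithms covered in the session:

    # Minimal sketch, assuming hypothetical data: two ML algorithm families
    # applied to DBA-style questions.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.cluster import KMeans

    # Supervised example: predict CPU consumption from workload (made-up numbers).
    executions_per_sec = np.array([[100], [200], [300], [400], [500]])
    cpu_secs_per_sec = np.array([0.9, 1.8, 2.8, 3.9, 5.1])
    reg = LinearRegression().fit(executions_per_sec, cpu_secs_per_sec)
    print("Predicted CPU at 600 exec/s:", reg.predict([[600]])[0])

    # Unsupervised example: group snapshot intervals into workload profiles.
    snapshots = np.array([
        [5.0, 60], [5.5, 62], [4.8, 58],   # each row: [db_time_per_sec, cpu_pct]
        [22.0, 30], [25.0, 28],
    ])
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(snapshots)
    print("Cluster assignments:", clusters)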

How To Analyze SQL Run Times By Plan Using ASH Data (Connect 2019)

Stream: DBAs; Developers; Managers

Imagine knowing SQL run times without instrumenting your application. That is exactly what I will teach you how to do by analyzing ASH data. Suppose the ticket says, "This query takes 45 seconds!" How can you confirm that? Is 45 seconds unusual? Has it happened before? Perhaps a bad plan is being used? What is a good plan for this SQL? This is valuable information that creatively analyzing ASH data will reveal. In this presentation, using ASH data, I will show you how to manually infer SQL run times. Then I'll show you how to use a simple yet flexible SQL script to analyze ASH data, infer SQL run times and report the results... even at the execution-plan level. Next, I'll show you how to analyze the run-time samples using the free statistical package R. Join me as we explore the untapped analysis opportunities ASH data provides.
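To illustrate the core idea (the session itself uses a SQL script against ASH plus the R package), here is a minimal Python sketch: because each v$active_session_history row represents roughly one second of sampled activity, counting the samples for a SQL_ID per execution approximates that execution's run time, and grouping those run times by plan hash value shows how each plan behaves. The rows and column names below are hypothetical:

    # Minimal sketch of the run-time inference idea using made-up ASH rows.
    from collections import defaultdict
    from statistics import mean, median

    # Hypothetical ASH rows: (sql_id, sql_exec_id, sql_plan_hash_value)
    ash_samples = [
        ("abc123", 101, 111), ("abc123", 101, 111), ("abc123", 101, 111),
        ("abc123", 102, 111), ("abc123", 102, 111),
        ("abc123", 103, 222), ("abc123", 103, 222), ("abc123", 103, 222),
        ("abc123", 103, 222), ("abc123", 103, 222), ("abc123", 103, 222),
    ]

    # 1) Infer per-execution run time: one v$ sample is roughly one second of activity.
    seconds_per_exec = defaultdict(int)
    for sql_id, exec_id, plan_hash in ash_samples:
        seconds_per_exec[(sql_id, exec_id, plan_hash)] += 1

    # 2) Summarize run times by plan, so a "bad plan" stands out.
    runtimes_by_plan = defaultdict(list)
    for (sql_id, exec_id, plan_hash), secs in seconds_per_exec.items():
        runtimes_by_plan[(sql_id, plan_hash)].append(secs)

    for (sql_id, plan_hash), times in runtimes_by_plan.items():
        print(f"{sql_id} plan {plan_hash}: n={len(times)} "
              f"mean={mean(times):.1f}s median={median(times):.1f}s max={max(times)}s")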