Feed aggregator

Running x86_64 Docker Images on Mac M1 Max (Oracle Database 19c)

DBASolved - Thu, 2022-09-22 14:42

  A few months ago, I wrote a post about me switching back to Windows (here).  I can say I […]

The post Running x86_64 Docker Images on Mac M1 Max (Oracle Database 19c) appeared first on DBASolved.

Categories: DBA Blogs

A Few Words About OAC Embedding

Rittman Mead Consulting - Thu, 2022-09-22 12:10
TL;DR To exit VIM you press Esc, then type :q! to just exit or :wq to save changes and exit and then press Enter.

Some time ago and by that I mean almost exactly approximately about two years ago Mike Durran (https://insight2action.medium.com) wrote a few blogs describing how to embed Oracle Analytics Cloud (OAC) contents into public third-party sites.

Oracle Analytics Cloud (OAC) Embedding — Public User Access — Part 1
Oracle Analytics Cloud (OAC) Embedding — Public User Access — Part 2

For anyone who needs to embed OAC reports into their sites, these blogs are a must-read and a great source of valuable information. Just like his other blogs and the official documentation, of course.

Visualizing Data and Building Reports in Oracle Analytics Cloud
The topics in this section explain how to use the JavaScript embedding framework to embed Oracle Analytics content into applications and web pages.

If you have ever tried it, you most likely noticed that the embedding process is not exactly easy or intuitive. Roughly it consists of the following steps:

  1. Create content for embedding.
  2. Set up infrastructure for authentication:
    2.1. Create an Oracle Identity Cloud Service (IDCS) application.
    2.2. Create an Oracle Functions function.
    2.3. Set up Oracle API Gateway.
  3. Embed JavaScript code into the third-party site.

Failing to implement any of the above leads to a fully non-functional thing.

And here is the problem: Mike knows this well. Too well. Some things that are entirely obvious to him aren't obvious to anyone trying to implement it for the first time. When you know something at a high level, you tend to skip bits and bobs here and there and various tasks look easier than they are.

A small story: when I was studying at university, our teacher told us a story. Her husband was writing a math book for students and wrote the infamous phrase all students love: "... it is easy to prove that ...". She told him that if it was easy to prove, he should do it.

He spent a week proving it.

That is why I think that I can write something useful on this topic. I'm not going to repeat everything Mike wrote, and I'm not going to re-write his blog. I hope that I can fill in a few gaps and walk through some of those "it is easy to do" things.

Also, this blog is not intended to be a complete step-by-step guide. Or, at least, I have no intention of writing such a thing. Although, it frequently happens that I'm starting to write a simple one-paragraph hint and a few hours later I'm still proofreading something with three levels of headers and animated screen captures.

Disclaimer. This blog is not a critique of Mike's blog. What he did is hard to overestimate and my intention is just to fill some gaps.

Not that I needed to make the previous paragraph a disclaimer, but all my blogs have at least one disclaimer and once you get locked into a serious disclaimers collection, the tendency is to push it as far as you can.

Testing out Token Generation

My main problem with this section is the following. Or, more precisely, not a problem but a place that might require more clarification in my opinion.

You’ll see that the token expires in 100 seconds and I describe how to increase that in a separate blog. For now, you can test this token will authenticate your OAC embedded content by copying the actual token into the following example HTML and deploying on your test web server or localhost (don’t forget to add suitable entries into the OAC safe domains page in the OAC console)

I mean, why exactly is 100 seconds a bad value? What problem does increasing it solve? Or, from a practical point of view, how do we know that our problem is the token lifespan?

It is easy and confusing at the same time. The easy part is that after the token expires, no interaction with OAC is possible. That is not a problem if you embed non-interactive content: if the users can only watch but not touch, the default value is fine. However, if the users can set filters or interact with the reports in any other way, the token must live longer than the expected interaction time.

Here is what it looks like when the token is OK:

And the same page a few minutes later:

Assuming that we don't know the right answer and need to find it, how do we do it? The browser developer console is your friend! The worst thing you can do to solve problems is to randomly change parameters and press buttons hoping that it will help (well, sometimes it does help, but don't quote me on that). To actually fix it we need to understand what is going on.

To be fair, at first sight, the most obvious and visible message is totally misleading. Normally, we go to the Console tab (Ctrl+Shift+J/Command+Option+J) and read what is written there. But if the token is expired, we get this:

The console shows multiple CORS errors:

Access to XMLHttpRequest at 'https://OAC-INSTANCE-NAME.analytics.ocp.oraclecloud.com/ui/dv/ui/api/v1/sessioninfo/ext' from origin 'https://THIRD-PARTY-SITE' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

CORS stands for Cross-Origin Resource Sharing. In short, CORS is a security mechanism implemented in all modern browsers which lets a server specify whether its content may be used by pages served from another origin.

Looking at this, we might assume that the solution is either to specify Safe Domains in OAC, or to set a CORS policy for our web server, or both. In reality, this message is misleading. The real error can be found on the Network tab.

Let's take a look at the first failed request.

Simply click it once and check the Headers tab. Here we can clearly see that the problem is caused by the token, not by CORS. The token is expired.

The same approach shows when there is something wrong with the token. For example, once I selected a wrong OAC instance for the Secure app. Everything was there. All options were set. All privileges were obtained. The token generation was perfect. Except it didn't work. The console was telling me that the problem was CORS. But here I got the real answer.
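As an aside, if you want to check the token's lifespan without waiting for it to fail, you can decode the token itself. The sketch below is not part of Mike's setup; it assumes the access token is a standard JWT carrying the usual exp and iat claims (IDCS-issued tokens normally are), and the token value is a placeholder you paste in yourself.

import base64
import json
import time

# Paste the access token returned by the Secure application here (placeholder value).
token = "<YOUR_ACCESS_TOKEN>"

# A JWT is three base64url-encoded parts separated by dots: header.payload.signature.
payload_part = token.split(".")[1]
payload_part += "=" * (-len(payload_part) % 4)   # restore the stripped base64 padding
payload = json.loads(base64.urlsafe_b64decode(payload_part))

print(f"lifespan:   {payload['exp'] - payload['iat']} seconds")
print(f"expires in: {payload['exp'] - int(time.time())} seconds")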

Oracle Functions Service

I feel like this is the part which can use more love. There are a few easy-to-miss things here.

And the most important question is: why do we need Oracle Functions at all? Can't we achieve our goal without Functions? The answer is yes, we can. Both Oracle Functions and API Gateway are optional components.

In theory, we can use the Secure application directly. For example, we can set up a cron job that will get the token from the Secure application and then embed the token directly into static HTML pages using sed or Python or whatever we like. It (theoretically) will work. Note, that I didn't say it was a better idea. Or even a good one. What I'm trying to say is that Functions is not an essential part of this process. We use Oracle Functions to make the process more manageable, but it is only one of the possible good solutions, not the only one.
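Just to make that alternative concrete, here is a minimal sketch of such a job in Python using the requests library. It is an illustration only: the IDCS URL, credentials, scope, the embed_template.html file and the %%OAC_TOKEN%% marker are all invented for the example, and in real life you would not keep a client secret in a script like this.

import base64
import requests

IDCS_URL = "https://<IDCS-domain>.identity.oraclecloud.com/oauth2/v1/token"
CLIENT_ID, CLIENT_SECRET = "<clientID>", "<clientSecret>"
USERNAME, PASSWORD, SCOPE = "<username>", "<password>", "<scope>"

# The same token request as the curl example further down, expressed with requests.
basic = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
resp = requests.post(
    IDCS_URL,
    headers={"Authorization": f"Basic {basic}",
             "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8"},
    data={"grant_type": "password", "username": USERNAME,
          "password": PASSWORD, "scope": SCOPE},
)
token = resp.json()["access_token"]

# Inject the fresh token into a static page that contains a %%OAC_TOKEN%% marker.
with open("embed_template.html") as src, open("embed.html", "w") as dst:
    dst.write(src.read().replace("%%OAC_TOKEN%%", token))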

So, back to Functions: what happens at this step is that we create a small self-contained environment with a Node.js application running in it. It is all based on Docker and the Fn Project, but that is not important to us.

The function we are creating is just one part of the solution; its job is to make getting the token simpler.

High-level steps are:

  1. Create an application.
  2. Open the application and either use Cloud Shell (the easy option) or set up a development machine.
  3. Init a boilerplate code for the function.
  4. Edit the boilerplate code and write your own function.
  5. Deploy the code.
  6. Run the deployed code.

Creating a new function is easy.  Go to Developer Services -> Applications

Create a new function and set up networking for it. The main thing to keep in mind here is that the network must have access to the Oracle Cloud Infrastructure Registry. If it doesn't, you'll get the error Fn: Error invoking function. status: 502 message: Failed to pull function image when trying to run the function (see Issues invoking functions).

The first steps with Oracle functions are simple and hard at the same time. It is simple because when you go to Functions, you see commands which should be executed to get it up and running. It is hard because it is not obvious what is happening and why. And, also, diagnostics could've been better if you ask me.

After you create an application, open it, go to Getting Started, press Launch Cloud Shell and do what all programmers do: copy and paste commands, trying to look smart and busy in the process. Literally. There are commands you can copy and paste to get a fully working Hello World example written in Java. Just one command has a placeholder to be changed.

Hint: to make your life easier, do step #5 (Generate an Auth Token) first and then come back to steps 1-4 and 6-11.

If everything is fine, you will see the "Hello, world!" message. I wonder, does that make me a Java developer? At least a junior? I heard that is how this works.

OK, after the Java hello-world example works, we can add Node.js to the list of our skills. Leave the Java hello-world example and initialize a new node function:

cd
fn init --runtime node embed_func

This creates a new Node.js boilerplate function located in the embed_func directory (the actual name is not important; you can choose whatever you like). Now go to this directory, edit the func.js file and put Mike's code there.

cd embed_func
vim func.js

- do some vim magic
- try to exit vim

I don't feel brave enough to give directions on using vim. If you don't know how to use vim but value your life or your reason, find someone who knows it.

But because I know that many wouldn't trust me anyway, I can say that to start editing the text you press i on the keyboard (note -- INSERT -- at the bottom of the screen); to save your changes and exit, press Esc (-- INSERT -- disappears), type :wq and press Enter. To exit without saving, type :q!; to save without exiting, :w. Read more about it here: 10 Ways to Exit Vim Editor

Image source: https://www.linuxfordevices.com/tutorials/linux/exit-vim-editor

Most likely, after you create a new node function, paste Mike's code and deploy it, it won't work and you'll get this message: Error invoking function. status: 504 message: Container failed to initialize, please ensure you are using the latest fdk and check the logs

I'm not a Node.js pro, but I found that installing NOT the latest version of the node-fetch package helps.

cd embed_func
npm install node-fetch@2.6.7

At the moment of writing this, the latest stable version of this package is 3.2.10: https://www.npmjs.com/package/node-fetch. I didn't test absolutely all versions, but the latest 2.x version seems to be fine and the latest 3.x version doesn't work, most likely because node-fetch 3.x is ESM-only and can no longer be loaded with require().

If everything was done correctly and you managed to exit vim, you can run the function and get the token.

fn invoke <YOUR APP NAME> <YOUR FUNCTION NAME>

This should give you a token every time you run it. If it doesn't, fix the problem before moving on.

Oracle API Gateway

API Gateway allows for easier and safer use of the token.

Just like Functions, the API Gateway is not an essential part. I mean, after (if) we decided to use Oracle Functions, it makes sense to also use a Gateway. Setting up a gateway to call a function only takes a few minutes, no coding is required, and things like CORS or HTTPS are handled automatically. With this said, using an API Gateway is a no-brainer.

In a nutshell, we create a URL and every time we call that URL we get a token. It is somewhat similar to where we started. If you remember, the first step was "creating" a URL that we could call to get a token. The main and significant difference is that now all details like login and password are safely hidden behind the API Gateway and Oracle Functions.

Before Functions and Gateway it was:

curl --request POST \
 --url https://<IDCS-domain>.identity.oraclecloud.com/oauth2/v1/token \
 --header 'authorization: Basic <base64 encoded clientID:ClientSecret>' \
 --header 'content-type: application/x-www-form-urlencoded;charset=UTF-8' \
 -d 'grant_type=password&username=<username>&password=<password>&scope=\
 <scope copied from resource section in IDCS confidential application>'

With API Gateways the same result can be achieved by:

curl --request GET \
 --url https://<gateway>.oci.customer-oci.com/<prefix>/<path>

Note that details like the login and password, the clientID and ClientSecret of the Secure application, or internal IDs are no longer exposed. Everything is safely hidden behind closed doors.

API Gateways can be accessed via the Developer Services -> [API Management] Gateways menu.

We click Create Gateway and fill in some very self-explanatory properties like name and network. Note that this URL will be called from the Internet (assuming that you are doing this to embed OAC content into a public site), so you must select the network accordingly.

After a gateway is created, go to Deployments and create one or more, well, deployments. In our case a deployment is a call to our previously created function.

There are a few things to mention here.

Name is simply a marker for you so you can distinguish one deployment from another. It can be virtually anything you like.

Path prefix is the actual part of the URL. This has to follow rather strict URL rules.

The other very important thing is CORS. At the beginning of this blog I already mentioned CORS but that time it was a fake CORS message. This time CORS is actually important.

If we are embedding OAC content into a site called https://thirdparty.com, we must add a CORS policy allowing us to do so.

If we don't do it, we will get an actual true authentic CORS error (the Network tab of the browser console):
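By the way, you do not have to rely on the browser to find out which CORS headers the gateway actually returns. Here is a small sketch that simulates the browser's preflight request; the gateway URL is a placeholder and https://thirdparty.com stands for the embedding site from the example above.

import requests

GATEWAY_URL = "https://<gateway>.oci.customer-oci.com/<prefix>/<path>"
ORIGIN = "https://thirdparty.com"

# A CORS preflight is an OPTIONS request carrying the Origin header.
resp = requests.options(
    GATEWAY_URL,
    headers={"Origin": ORIGIN, "Access-Control-Request-Method": "GET"},
)
print("status:", resp.status_code)
for header in ("Access-Control-Allow-Origin", "Access-Control-Allow-Methods"):
    print(f"{header}: {resp.headers.get(header, '<missing>')}")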

Another very likely problem: after you have created a working function, exited vim, and created a gateway and a deployment, you try to test it and get the error message {"code":500,"message":"Internal Server Error"}:

If you are getting this error, it is possible that the problem is caused by a missing policy:

Go to Identity & Security -> Policies and create a policy like this:

ALLOW any-user to use functions-family in compartment <INSERT YOUR COMPARTMENT HERE> where ALL { request.principal.type= 'ApiGateway'}

A few minor things

It is rather easy to copy pieces of embedding code from the Developer menu. However, by default this menu option is disabled.

It can be enabled in the profile. Click your profile icon, open Profile then Advanced and Enable Developer Options. It is mentioned in the documentation but let's be real: nobody reads it.

If you simply take the embedding script, it won't work.

This code lacks two important modules: jquery and obitech-application/application. If either of them is missing you will get this error: Uncaught TypeError: application.setSecurityConfig is not a function. And by the way, the order of these modules is not exactly random. If you put them in an incorrect order, you will likely get the same error.

As a conclusion

After walking this path with a million ways to die, we get this beautiful-looking page: Niðurstaða stafrænna húsnæðisáætlana 2022

https://hms.is/husnaedi/husn%C3%A6%C3%B0isa%C3%A6tlanir/m%C3%A6labor%C3%B0-husn%C3%A6%C3%B0isa%C3%A6tlana/ni%C3%B0ursta%C3%B0a-stafr%C3%A6nna-husn%C3%A6%C3%B0isa%C3%A6tlana-2022
Categories: BI & Warehousing

OAC Semantic Modeler and Version Control with Git

Rittman Mead Consulting - Wed, 2022-09-21 12:35

This is my third blog post in the series of posts about OAC's Semantic Modeler. The first one was an overview of the new Semantic Modeler tool, the second was about the new SMML language that defines Semantic Modeler's objects. This post is about something that OBIEE developer teams have been awaiting for years - version control. It looks like the wait is over - Semantic Modeler comes with native Git support.

When you open Semantic Modeler from OAC, you will see two toggle buttons in the bottom right corner:

The right toggle is for Git Panel, where version control magic takes place.

Enabling Git for a Semantic Model

Version control with Git can be enabled for a particular Semantic Model, not the whole Modeler repository. When you first open the Git Panel, it will inform you that it requires configuration.

Click Start and you will be asked for a Git Repository URL and the name of the main branch. I created my test repository on Github but you may have your own company internal Git server. The easiest way to establish version control for a Model is to create an empty Git repository beforehand - that is what I did. In the "Initialize Git" prompt, I copied the full https URL of my newly created, empty (no README.md in it!) Github repository and clicked "Continue".

If the repository URL is recognised as valid, you will get the next prompt to choose a Git profile, which holds your Git login credentials. To create a new profile, add your Git user name and password (or Personal Access Token if you are using Github) and name your profile.

Click "Initialize Git". After a short while, a small declaration of success should pop up...

... and the Git Panel will now have a typical set of Git controls and content.

Next, let us see it in action.

Git and Semantic Modeler - the Basics

The basics of Semantic Modeler's version control are quite intuitive and user friendly. Let us start by implementing a simple change to our Customers dimension: let us rename a column.

We type in the column name, press Enter. We see that the Unstaged Changes list in the Git Frame is still empty. We press Ctrl+S to save our changes and the Unstaged Changes list gets updated straight away.

We click on "Stage All". At the bottom of the Git panel, "Commit description" input and "Commit" button appear.

We enter a description, click "Commit" and get a message:

However, the changes have not yet been pushed to the Git server - we need to push them by clicking the "Push" button.

Now let us check the repository content in the Git server.

We can see the "Dim - Customers.json" SMML file has just been updated.

Git and Semantic Modeler - Working with Branches

At Rittman Mead we are evangelists of Gitflow - it works well with multiple developers working independently on their own features and allows us to be selective about which features go into the next release. The version control approach we have developed for OBIEE RPD versioning as part of our BI Developer Toolkit relies on Gitflow. However, here it is not available to us. No worries though - where there is a will, there is a way. Each of our devs can still have their own branch.

Before we start playing with branches, let us make sure our main branch is saved, checked in and pushed. To create a new branch, we click on the "Create Local Branch" button.

We base it on our current "main" branch. We name it "dev-janis" and click "Create".

If successful, the Current branch caption will change from "main" to "dev-janis". (An important part of version control discipline is to make sure we are working with the correct branch.)

In our dev branch, let us rename the "LAST_NAME" column to "Last Name".

Save.

Stage. Commit. Push.

Once pushed, we can check on the Git server, whether the new branch has been created and can explore its content.

We can also switch back to the "main" branch to check that the "LAST_NAME" column name remains unchanged there.

Git and Semantic Modeler - Merge Changes

The point of having multiple dev branches is to merge them at some point. How easy is that?

Let us start with changes that should merge without requiring conflict resolution.

In the previous chapter we have already implemented changes to the "dev-janis" branch. We could merge it to the "main" branch now but Git would recognise this to be a trivial fast-forward merge because the "main" branch has seen no changes since we created the "dev-janis" branch. In other words, Git does not need to look at the repository content to perform a 3-way merge - all it needs to do is repoint the "main" branch to the "dev-janis" branch. That is too easy.

Before merging, we will implement a change in the "main" branch.

We switch to the "main" branch.

We rename the "INITIALS" column to "Initials".

Save. Stage. Check in. Push.

Let us remind ourselves that in the "dev-janis" branch, the INITIALS column is not renamed and the LAST_NAME column is - there should be no merge conflicts.

To merge the "dev-janis" branch into the "main" branch, we switch to the "main" branch. We click the "Merge" button.

We select the Merge branch to be "dev-janis" and click "Merge".

The Merge Strategy value applies to conflict resolution. In our simple case, there will be no merge conflicts so we leave the Strategy as Default.

After the merge, I could see both the "INITIALS" and the "LAST_NAME" columns renamed - the merge worked perfectly!

Save. Stage. Check In. Push.

Well, how easy was that!? Anybody who has managed an OBIEE RPD multidev environment will appreciate the new Semantic Modeler.

Git and Semantic Modeler - Merge Conflict Resolution

At last we have come to merges that require conflict resolution - the worst nightmare of OBIEE RPD multidev projects. How does it work with OAC's Semantic Modeler?

Let us create a trivial change that will require conflict resolution - we will rename the same column differently in two branches and then will merge them. The default Git merge algorithm will not be able to perform an automatic 3-way merge.

We use our trusted Customers dimension, we select the "main" branch and change the column "DOB" name to "Date of Birth".

"main" branch:

Save. Stage. Check in. Push.

In the "dev-janis" branch, we rename the same "DOB" column to "DoB".

"dev-janis" branch:

To merge, we switch back to the "main" branch. (Pull if necessary.) As the branch to be merged with, we choose "dev-janis".

As for Merge Strategy, we have 3 options here: Default, Ours and Theirs. (In our example, Ours would be the "main" branch whereas Theirs would be "dev-janis".) We can use the Ours and Theirs strategies if we are 100% certain that, in case of a conflict, we should always prefer one branch or the other. In most cases, however, we want to see what the conflicts are before deciding upon the resolution, so I expect the Default strategy will almost always be used. You can read more about Merge Strategies in OAC's documentation here.

We call the Merge command.

As expected, we get merge errors - two of them. (The reason there are two is that our Business Layer column names are automatically propagated to the Presentation Layer, hence we get one error from the Business Layer and another from the Presentation Layer.)

We click on the first conflict. Lo and behold - we get a proper A/B conflict merge window that does a decent job at showing us the merge conflict. However, I did find it to be a bit buggy - the "Resolve Item" and "Resolve All" buttons only work when you click on their edges. Also the A and B version toggle buttons did not work at all for me.

However, I got it working by using the Take All buttons, since there was only a single conflict to resolve. I wanted the B version, so I clicked the Take All button in the B frame, then clicked the Resolve Item button and pressed Ctrl+S. That did the trick and the error disappeared from the Merge Conflicts list. I did the same with the Presentation Layer conflict. After that, there were no more merge conflicts in the Merge frame and I got the message there: "Merge successful".

And successful it was. This is the end result - the Customer dimension in the "main" branch after a conflict-resolved merge.

Version control merge conflict resolution can be challenging. I recommend you read the Understand and Resolve Merge Conflicts chapter from the OAC documentation.

Conclusions

As of the time of this writing, version control features like change staging, committing, pushing and pulling, and switching between branches appear to be rock-solid. When it comes to merging, and in particular merge conflict resolution, the functionality appears a bit more capricious, but it still worked for me.

Overall, Git support in Semantic Modeler looks well designed and will be a huge improvement over OBIEE RPD versioning.

If you want to run advanced queries against your repository content from Python, or if you want to auto-generate Semantic Model content with scripts, version control with Git will enable that. And it will be easier than it was with the OBIEE RPD.

Categories: BI & Warehousing

Autonomous Database Strategic Customer Program

Tom Kyte - Wed, 2022-09-21 05:26
Is there a link to information about the ADB-S Strategic Customer Program?
Categories: DBA Blogs

HTMX: Saving Form Changes - Django CRUD, part 4

Andrejus Baranovski - Wed, 2022-09-21 03:29
I explain how to run POST request through HTMX attribute on HTML form tag to save form changes to the backend. You will see how to report validation errors through HTMX response.

 

Database Patching : It’s a difficult subject

Tim Hall - Wed, 2022-09-21 02:42

If you came here hoping I was going to say there are valid reasons not to patch, you are out of luck. There is never a valid reason not to patch… Instead this post is more about the general approach to patching. I’ve spent 22+ years writing about Oracle, including how to install it, but … Continue reading "Database Patching : It’s a difficult subject"

The post Database Patching : It’s a difficult subject first appeared on The ORACLE-BASE Blog.

Fluid Fundamentals 90-minute Webcast

Jim Marion - Tue, 2022-09-20 10:04

 I am constantly amazed at the flexibility of PeopleSoft's Fluid UX. Through AddJavaScript and AddStylesheet, we can make the user interface do or appear however we desire. But in all our cleverness, we can't neglect the fundamentals such as:

  • What are the differences between Classic and Fluid?
  • What are the purposes of the various Fluid layouts?
  • How do I align columns and fields in Fluid?
  • What do I do with related display fields? Why won't they appear beside their control fields?
  • How do you render grids on mobile?
  • Is there a place for the scroll area in Fluid?
We regularly teach Fluid development principles. As developers become familiar with Fluid's CSS capabilities, they begin writing their own CSS. Is that OK? Should we write our own CSS? Absolutely! A better question is, "when?" When should we write our own CSS? When should we inject our own JavaScript? These are the questions we answer through Fluid fundamentals. Whether you have years of experience with Fluid or you are just starting your Fluid journey, we all learn by getting back to the basics: The Fundamentals. So join us on September 29th from 1:00 to 2:30 PM Central for 90 minutes of Fluid Fundamentals. Register now!

Saving the World from Fat-finger moments – with regexp_like

The Anti-Kyte - Tue, 2022-09-20 01:30

It’s not uncommon for a database application to have its behaviour defined, to an extent at least, by records in reference data tables.
By its nature, this data is static and the tables in which it resides tend to contain comparatively few rows. However, such an approach can be susceptible to erroneous data entry, especially where key values are concerned.
Having spent many an “entertaining” afternoon/evening/dead-of-night, trying to hunt down some mystery bug, only to find that one of these values includes an extraneous space or invisible control character, I’ve come to appreciate the ability of regexp_like to point these things out.
The code examples that follow should be based on some sensible data set, probably from the HR demo schema. However, Dr Fatfinger does sound rather like a Bond villain…

Over the years, MI6 has built up a list of individuals whose activities bear close scrutiny.
They want to know if any of these people start bulk buying piranha food, or looking at hollowed out volcanoes on Zoopla :

create table megalomaniacs (
    evil_genius varchar2(100),
    start_date date,
    constraint megalomaniacs_pk primary key (evil_genius))
/

insert into megalomaniacs( evil_genius, start_date)
values('DOCTOR NO', to_date('01-OCT-1962', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('ERNST STAVRO'||chr(256)||' BLOFELD', to_date('10-OCT-1963', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values(' AURIC GOLDFINGER', to_date('17-SEP-1964', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('EMILIO LARGO', to_date('09-DEC-1965', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('DOCTOR KANANGA', to_date('27-JUN-1973', 'DD-MON-YYYY'));
	
insert into megalomaniacs( evil_genius, start_date)
values('FRANCISCO SCARAMANGA', to_date('19-DEC-1974', 'DD-MON-YYYY'));
	
insert into megalomaniacs( evil_genius, start_date)
values('KARL STROMBERG', to_date('7-JUL-1977', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('HUGO DRAX ', to_date('26-JUN-1979', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('ARISTOTLE KRISTATOS', to_date('24-JUN-1981', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('KAMAL KHAN', to_date('06-JUN-1983', 'DD-MON-YYYY'));
	
insert into megalomaniacs( evil_genius, start_date)
values('MAX ZORIN', to_date('22-MAY-1985', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('GENERAL KOSKOV', to_date('29-JUN-1987', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('FRANZ SANCHEZ', to_date('13-JUN-1989', 'DD-MON-YYYY'));
	
insert into megalomaniacs( evil_genius, start_date)
values('ALEC TREVELYAN', to_date('13-NOV-1995', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('ELLIOT CARVER', to_date('09-DEC-1997', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('ELEKTRA KING', to_date('08-NOV-1999', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('COLONEL TAN-SUN MOON', to_date('20-NOV-2002', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('MR WHITE', to_date('14-NOV-2006', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('DOMINIC GREEN', to_date('20-OCT-2008', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('RAOUL SILVA', to_date('23-OCT-2012', 'DD-MON-YYYY'));

insert into megalomaniacs( evil_genius, start_date)
values('LYUTSIFER SAFIN	', to_date('28-SEP-2021', 'DD-MON-YYYY'));

commit;

However, some of these people are slipping through the net…


select evil_genius, to_char(start_date, 'DD-MON-YYYY') as start_date
from megalomaniacs 
where evil_genius in ( 'MR WHITE', 'LYUTSIFER SAFIN', 'AURIC GOLDFINGER', 'ERNST STAVRO BLOFELD');

EVIL_GENIUS                    START_DATE          
------------------------------ --------------------
MR WHITE                       14-NOV-2006         

1 row selected. 

We suspect the handiwork of Dr Fatfinger, possibly through the activities of those notorious henchpeople, Copy and Paste.

Fortunately, we can use a regexp to identify any records that contain :

  • a leading or trailing non-printing character
  • a control character

select evil_genius, length( evil_genius),
    to_char(start_date, 'DD-MON-YYYY') as start_date
from megalomaniacs
where regexp_like(evil_genius, '^[[:space:]]|[[:cntrl:]]|[[:space:]]$', 'i');

Ah, Dr Fatfinger. We’ve been expecting you !

Mock Service URLs for PeopleSoft Testing

Jim Marion - Mon, 2022-09-19 14:21

 As you create, learn, and discover Integration Broker's capabilities, it is helpful to have mock APIs for testing purposes. Here is a list of my favorites.

Many of these services have secure and non-secure alternatives. The value of insecure testing is you don't have to import certificates. On the other hand, the secure versions help you prove your PeopleSoft certificate import skills.

Do you have some sample APIs you use? If so, please share them in the comments!

At JSMpros, we teach PeopleSoft REST and SOAP integrations regularly. Want to learn more? Check out our Integration Tools Update course to learn best practices and strategies for incorporating REST and JSON into your development process!

Vagrant : “SSH auth method: private key” – Timed out…

Tim Hall - Sun, 2022-09-18 04:25

Out of nowhere I recently started to get problems with Vagrant running on a Windows 11 host. The “vagrant up” command would always hang at the “SSH auth method: private key” stage. You can see an example of the output here. default: SSH address: 127.0.0.1:2222 default: SSH username: vagrant default: SSH auth method: private key … Continue reading "Vagrant : “SSH auth method: private key” – Timed out…"

The post Vagrant : “SSH auth method: private key” – Timed out… first appeared on The ORACLE-BASE Blog.

Simple Code-only REST Request

Jim Marion - Wed, 2022-09-14 16:13

PeopleSoft is a metadata-driven application. It stands to reason, therefore, that the solutions we create require metadata. Sometimes metadata, which is supposed to be configurable, forces us down an immutable (non-configurable) path. Integration Broker is a fantastic example of semi-mutable, bordering on immutable metadata. As we prototype, we often make changes. But the testing associated with prototyping locks various metadata pieces. Sometimes we just want to test without metadata and backfill after we choose the proper path.

Here is a very simple metadata-free REST request example I run from a local App Engine program for prototyping and testing purposes.

Now, an important question:

Once you have a working "simple" solution, do you backfill and update with proper Integration Broker metadata?

Let us know your thoughts in the comments below!

At JSMpros, we teach Integration Tools concepts regularly. Check out our website to see what we are offering next!

OAC Semantic Modeler and the SMML Language

Rittman Mead Consulting - Wed, 2022-09-14 05:28

There are a few reasons to like the Oracle Analytics Cloud (OAC) paired with the new Semantic Modeler:

  • It works in the cloud. I mean, it actually works. (Check my blog post for an overview of Semantic Modeler.)
  • It has a native, built-in git integration. (A blog post about that is coming shortly.)
  • The SMML language.

The SMML, which is short for Semantic Modeler Markup Language, is my personal favourite feature coming with the new Semantic Modeler.

The old UDML and the new SMML

In the old days the only way to manipulate the OBIEE RPD content automatically was to export the RPD in UDML format, tweak it and then import it back. UDML was proprietary, undocumented and an utter nightmare to parse - content analysis was usually done with sophisticated regex and it did not work very well. The use of UDML was mostly "local" when you copy an RPD object into Notepad, tweak it and then copy it back into the RPD. That all changed with the XML export feature - parsing of the whole repository became feasible, almost easy. At Rittman Mead we have parsed the entire RPD content and inserted it into a relational database to make it available for further analysis like lineage or data dictionary. The XML export capability was a game changer. However, for individual RPD object manipulation we were still stuck with UDML.

In OAC and with the new Semantic Modeler, UDML is replaced with SMML. It is documented. It is JSON-based, which I usually prefer to XML. Instead of modifying UDML at our own risk, Semantic Modeler objects are available in SMML format from the Modeler itself - the Modeler features a SMML editor.

Editing SMML in Semantic Modeler

The easiest way to access Semantic Modeler objects in SMML format is from the Modeler itself. You right-click a Modeler object, choose Open in SMML Editor from the pop-up menu and a new tab will open with the script.

However, this way you are accessing only one object at a time. (You cannot multi-select objects to have them in a single SMML editor view.) Moreover, when opening a high-level object like a Business Model, you will not get full SMMLs of its Logical Tables - you will have to open the SMML Editor for each Logical Table separately. Also, some low-level objects like Columns do not have their own SMML - when opening SMML Editor for a particular Column, you will get the SMML code for the whole Table.

Let us give it a try. We go to the Logical Layer, double-click the "Dim - Customers" Logical Table to open it in designer.

Now, right-click the same "Dim - Customers" table and open the SMML Editor from the menu. The two tabs will sit next to each other.

Let us change a dataType value from NUMERIC to VARCHAR:

To save the changes, we either press Ctrl+S or click the save icon in the top right corner. Now let us go back to the designer tab. We see that the data type in the designer has changed from '99' to 'ab' - without refreshing the designer we can see the change there. (Note that the same change cannot be done from the designer - data types are normally derived from the Physical sources, instead of specified explicitly.)

Can repository content be broken in SMML Editor? Let us try that by renaming a JSON key:

When trying to save it, I get this error:

No explanation, no line number. In this case, the save did not take place. I can either revert the change in the SMML Editor itself or I can close the Editor tab and choose to discard changes.

However, the way errors come up is inconsistent. For some errors, I would get this message:

When I choose to proceed (common sense says No but I'm doing this for you!), the Dim - Customers dimension disappears from the Business Model...

But it is not all lost. It is now found in the Invalid Files section.

Here I can open it and this time the row with the error is actually highlighted. I remove the unneeded commas that make the JSON invalid and save it.

However, the file is still in the Invalid Files section and still missing from the Business Model - I don't think that is how it is supposed to work. Perhaps I am missing a simple trick here but I could not find an easy way of moving the Customer dimension back to the Business Model. The best solution for me was to use the git reset command.

I will describe git integration in my next blog post.

SMML editing works well, but when you get it wrong, the editor does not do a good job of telling you what and where the issue is. However, it does tell you there is an issue, and when that happens, do not save!

We have seen that SMML editing works well for individual objects. But how about a whole repository export and import?

Whole Repository SMML Export and Import

The documentation does not go into much detail on whole repository exports and imports. I found two ways of exporting the whole repository in SMML format and one way of importing it.

The easiest way to export the whole Semantic Model is to open it in Modeler and then open the triple-dot menu from the top right corner.

There you can choose to Export and then specify the name of the Zip file.

Upon pressing Export, the zip file gets generated and your browser will download it. When unpacking the archive, this is what you see - (at least) three folders...

...with lots of JSON files in them - those are SMML scripts.

An alternative way of exporting the whole repository in SMML format is to upload it to a git repository. In git, the repository is stored in exactly the same format as the export zip file.

When the repository is in git, you can clone the repository locally and edit it with a text editor - it will be the same as with a zip export. However, when done, you can commit and push your changes back to the git repository and then pull the repository changes from Semantic Modeler - this is the only way I found to import the whole repository into Semantic Modeler.

Querying OAC Repository SMML Content with Python

Why do I love the SMML so much? Because it presents a great opportunity for repository content analysis and development acceleration.

Now that we have the entire OAC Repository exported in JSON format, we can use Python to query it or even modify its content. Python is very good at handling data in JSON format - it is much easier than XML.

Let us start with something simple - in Python we open the Customers dimension JSON to count columns in it:

import json

with open("D:\OAC-Semantic-Modeller\Repo\logical\HelloBusinessModel\Dim - Customers.json") as f:
    customerJson = json.load(f)

    print(f"Logical Table Name is {customerJson['logicalTable']['name']} and its type is {customerJson['logicalTable']['type']}.")

    logicalColumnNames = [lc['name'] for lc in customerJson['logicalTable']['logicalColumns']]
    print(f"The table has {len(logicalColumnNames)} columns:\n\t{', '.join(logicalColumnNames)}")

These are just a few lines of code and the output looks like this:

Note above that in the SMML JSON, the name attribute is a single attribute whereas logicalColumns is a list. In Python they become a single attribute and a list accordingly.

It would be easy to modify the above script to do the same for more than one table, to run the script regularly to keep track of table column changes...

However, it would be more useful to do queries across multiple repository objects. With a bit more effort, we can do that.

Let us set a task to get a list of all Logical Table Columns in the format <Logical Table name>.<Logical Column name> (<Logical Column data type>) but without opening each JSON individually like we did in the first script. Instead we want to load the whole repository in memory and then run our queries.

import json
import os
from functools import reduce

def getFileContent(filePath):
    with open(filePath) as f:
        return json.load(f)


def getFolderContent(folderPath):
    folderPaths = [{'name': f, 'path': os.path.join(folderPath, f)} for f in os.listdir(folderPath) if not os.path.isfile(os.path.join(folderPath, f))]
    filePaths   = [{'name': f, 'path': os.path.join(folderPath, f)} for f in os.listdir(folderPath) if os.path.isfile(os.path.join(folderPath, f))]

    folderContent = [{f['name']: getFolderContent(f['path'])} for f in folderPaths]
    fileContent = [{f['name']: getFileContent(f['path'])} for f in filePaths]

    return reduce(lambda a, b: {**a, **b}, folderContent + fileContent)


smmlRootPath = "D:\OAC-Semantic-Modeller\Repo"  # this is the root path to where I unzipped the SMML export
allRepositoryJson = getFolderContent(smmlRootPath)

# get all Logical Tables in the Repository
logicalTableNames = [lt['logicalTable']['name'] for lt in allRepositoryJson['logical']['HelloBusinessModel'].values()]
print(f"There are {len(logicalTableNames)} Logical Tables in the repository: {', '.join(logicalTableNames)}\n")

# get all Logical Table Columns
ltColumns = []
for ltName in logicalTableNames:
    ltColumns.extend([
        f"{ltName}.{lc['name']} ({lc['dataType']})"
         for lc in allRepositoryJson['logical']['HelloBusinessModel'][ltName + '.json']['logicalTable']['logicalColumns']
    ])

div = "\n\t * "
print(f"There are {len(ltColumns)} Columns found across {len(logicalTableNames)} Logical Tables:{div}{div.join(ltColumns)}")

The script is still quite simple and now allows us to run pretty much any query we can think of against the repository. The (top of the) output looks like this:

The new SMML language allows for a much easier repository content analysis and manipulation with Python - I expect that no big OAC project in the future will go without a Python developer. Development standards checks, change monitoring, development speed measurement, lineage tracing, repetitive repository content generation - these are just a few scripting opportunities that come to mind.
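To make the "development standards checks" idea a little more tangible, here is a small sketch that builds on the allRepositoryJson structure from the script above and flags Logical Columns whose names still look like physical column names (all caps and underscores). The naming rule itself is just an invented example of a convention you might want to enforce.

import re

# Reuses allRepositoryJson and logicalTableNames from the previous script.
physicalStyle = re.compile(r"^[A-Z0-9_]+$")   # example convention: no ALL_CAPS names in the Business Model

violations = []
for ltName in logicalTableNames:
    lt = allRepositoryJson['logical']['HelloBusinessModel'][ltName + '.json']['logicalTable']
    violations.extend(f"{ltName}.{lc['name']}"
                      for lc in lt['logicalColumns']
                      if physicalStyle.match(lc['name']))

print(f"{len(violations)} column(s) still use physical-style names:")
print("\n".join(f"\t * {v}" for v in violations))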

Conclusions
  • The SMML language is a major improvement over OBIEE's UDML language;
  • It is a good way of quickly editing a repository object in a text editor;
  • Editing SMML is more dangerous than using the standard object designer. The SMML editor will give you warnings if you try to save something dodgy but you can break things;
  • Use Python to run repository-wide queries and updates. It is easier than it was with OBIEE's XML extracts;
  • If planning to use SMML editing extensively, enable git and follow best version control practices - if you do break something, make sure it is your own dev branch.
Categories: BI & Warehousing

Recordings Available!

Jim Marion - Mon, 2022-09-12 13:29

Did you know we record all of our events? Replays are available through our online learning platform. Be sure to check out the webinar collection. Each replay includes sample code downloads, handouts, a Q&A roster, and links to additional resources. Topics include:

  • Integration (REST, SOAP, Cloud, Application Services Framework)
  • Fluid (Fundamentals, Drop Zones, Grids, Navigation, Complex CSS)
  • PeopleCode Application Classes
  • Isolated Customizations and Configuration Alternatives (Drop Zones, Event Mapping, Related Content, Related Actions, Activity Guides, Page and Field Configurator)

Do you prefer interactive training with plenty of hands-on activities? Check out our schedule to see what we are offering next!

Dumping redo

Jonathan Lewis - Mon, 2022-09-12 04:05

In the past I’ve sometimes had to dump the contents of the redo log to a trace file when I needed to find out what work Oracle was doing behind the scenes. To minimise the volume dumped by the “alter system dump logfile” command and make it easier to find the bit I wanted to see I used to “switch logfile” just before (and sometimes just after) the statement I was investigating.

With the advent of pluggable databases the “switch logfile” command now raises Oracle error: “ORA-65040: operation not allowed from within a pluggable database”, so I had to change the strategy. This is just a brief note (echoing a footnote to an older note) of the approach I now use:

column current_scn new_value start_scn
select to_char(current_scn,'9999999999999999') current_scn from v$database;

-- do something interesting here

column current_scn new_value end_scn
select to_char(current_scn,'9999999999999999') current_scn from v$database;

alter session set tracefile_identifier='sometextyoulike';

alter system dump redo scn min &start_scn scn max &end_scn ;
alter session set tracefile_identifier='';

The list of options for the dump has been extended since I published the note on dumping the log file, and now (19.11.0.0) allows the following options (using C notation for the type of the variables you supply to each parameter):

 rdba min  %d rdba max  %d tablespace_no  %d
 dba min  %u  %u dba max  %u  %u
 securefile_dba  %u  %u
 length  %d
 time min  %d
 time max  %d
 layer  %d
 opcode  %d
 scn min  %llu
 scn max  %llu
 xid  %d  %d  %d
 objno  %u
 con_id  %d
 skip corruption


If you try to restrict the dump on objno (object id) or xid (transaction id) then the trace file will skip any redo records generated by private threads / in-memory undo and report the text: “Skipping IMU Redo Record: cannot be filtered by XID/OBJNO”

The tablespace_no option can only be used when both rdba min and rdba max (rollback data block address range) have been specified.

The con_id option may only be legal when used to specify a PDB from the CDB

Remember – when you dump redo you get just the redo for your PDB; there is some scope for being selective, but the starting point would be all the redo for the PDB you’re working from.

Linux – Checking that two files contain the same data but in a different order – Sorted!

The Anti-Kyte - Mon, 2022-09-12 01:30

Trying to regression test a change to a feed-file generation program can be tricky. Whether the format is CSV or some fashionable markup language, the ordering of the result set tends to be unimportant in such circumstances.
When testing, we need to verify that the files produced by the new version of the program contain the same data as those produced by the old version, irrespective of the order in which the records are written.

Recently, I was rescued from my struggle with just such a problem by my colleague, Don Thomson, who imparted some (Linux) Jedi wisdom resulting in a simple yet effective solution, involving an inventive combination of Linux utilities.

What we’re going to look at here is :

  • comparing files with diff
  • using sort to give diff a hand
  • comparing sets of files in different directories using sum
  • ignoring trailer records in delimited files using grep or head

Some Data Files

The first file we’ll use in our example is called episodes.txt and contains the following :

one
two
three
four
five
six
seven
eight
nine

The file we’re comparing it to is called releases.txt :

four
five
six
one
two
three
seven
eight
nine

As you can see, the files contain the same data but the first is in numerical order and the second is the result of what may be considered a "Skywalker" sort.

Predictably, diff decides that they are not identical :

diff episodes.txt releases.txt

1,3d0
< one
< two
< three
6a4,6
> one
> two
> three

Before we go any further, let’s use some switches to minimize the diff output as, for the purposes of this exercise, we just want to tell whether or not the files are the same.

diff -qs episodes.txt releases.txt

Files episodes.txt and releases.txt differ

Fortunately, Don knows just the thing to help us perform a data – rather than line-by-line – comparison…

Sorting it out

Let’s see what happens when we use the sort command on episodes.txt :

sort episodes.txt

eight
five
four
nine
one
seven
six
three
two

Interesting. It seems to have sorted the file contents ("data") into alphabetical order. Let's see what it does with releases.txt :

sort releases.txt

eight
five
four
nine
one
seven
six
three
two

That’s useful. The output is identical. Now we just need to pass the sort output for each file to diff.
We can do this using sub-shells. As we’re running in Bash, the syntax for this is :

diff -qs <(sort episodes.txt) <(sort releases.txt)

Files /dev/fd/63 and /dev/fd/62 are identical

Note that the filenames in the output are the temporary files that hold the output (stdout) from each sub-shell.

Just to prove that this solution does detect when the rows of data are different in the files, let’s introduce a “rogue” one…

echo 'three-and-a-bit' >>episodes.txt

diff -qs <(sort episodes.txt) <(sort releases.txt)

Files /dev/fd/63 and /dev/fd/62 differ
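Purely as an aside, if you happen to be post-processing these feeds in Python anyway, the same order-insensitive comparison only takes a few lines there too. Counter compares multisets, so duplicate lines are treated the same way as diff-ing the sorted files would treat them; the file names are the ones from the example above.

from collections import Counter

def same_data(path_a, path_b):
    # True if the two files contain the same lines, ignoring their order
    with open(path_a) as a, open(path_b) as b:
        return Counter(a.readlines()) == Counter(b.readlines())

print(same_data("episodes.txt", "releases.txt"))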

Comparing two sets of files in different directories with a checksum

Whilst this approach works really well for comparing two files, you may find that it’s not that quick when you’re comparing a large number of large files. For example, we have a directory containing files that we generated before making any changes to our (fictitious) program :

mkdir baseline_text_files

ls -l baseline_text_files/*.txt

-rw-rw-r-- 1 mike mike 14 Sep 10 13:36 baseline_text_files/originals.txt
-rw-rw-r-- 1 mike mike 14 Sep 10 13:36 baseline_text_files/prequels.txt
-rw-rw-r-- 1 mike mike 17 Sep 10 13:36 baseline_text_files/sequels.txt

The file contents are :

cat originals.txt
four
five
six

cat prequels.txt
one 
two 
three

cat sequels.txt
seven 
eight
nine

Files generated after modifying the program are in the new_text_files directory :

cat originals.txt
four
six
five

cat prequels.txt
three
two
one

cat sequels.txt
eight
nine
seven

Don’s rather neat alternative to diffing each pair of files is to create a checksum for each file and write the output to a temporary file, one per directory. We then just diff the two temporary files.

There are a number of utilities you can use to do this and the complexity of the checksum algorithm used may impact the runtime for a large number of files.
In light of this, we’ll be using sum, which seems to be the simplest and therefore (in theory) the fastest.

A quick test first :

sum baseline_text_files/originals.txt
45749     1

The first number is the checksum. The second is the file block count.

Now we’ve identified the required utilities, this script should do the job. I’ve called it data_diff.sh and you can download it from Github should you feel inclined to do so. The link is here.

#!/bin/sh
# Difference between files in two directories
orig_dir=$1
new_dir=$2

TMPFILE1=$(mktemp)
TMPFILE2=$(mktemp)

for file in $orig_dir/*
do 
    sort $file |sum >> $TMPFILE1
done

for file in $new_dir/*
do
    sort $file|sum >>$TMPFILE2
done 
diff -qs $TMPFILE1 $TMPFILE2

is_same=$?

if [ $is_same -eq 1 ] 
then
    echo 'Files do not match'
else 
    echo 'Files are identical'
fi 

#delete the temporary files before exiting, even if we hit an error
trap 'rm -f $TMPFILE1 $TMPFILE2' exit

Run this and we get :

sh data_diff.sh baseline_text_files new_text_files

Files /tmp/tmp.OshLmwGL0J and /tmp/tmp.Uz2mUa0SSY are identical
Files are identical

If we introduce a difference in one of the existing files…

echo 'SOLO' >>new_text_files/sequels.txt

sh data_diff.sh baseline_text_files new_text_files

Files /tmp/tmp.4OGnjQls0S and /tmp/tmp.L7OUZyGUzl differ
Files do not match

Unsurprisingly, the script will also detect a difference if we’re missing a file…

touch baseline_text_files/tv_series.txt

sh data_diff.sh baseline_text_files new_text_files

Files /tmp/tmp.LsCHbhxK1D and /tmp/tmp.358UInXSJX differ
Files do not match

Ignoring Trailer Records in delimited files

With text delimited files, it’s common practice to include a trailer record at the end of the file to confirm it is complete.
This record will typically include the date (or timestamp) of when the file was created.

Such a file might look like this in the baseline_files directory :

cat baseline_files/episodes.csv

HEADER|episode|title|release_year
I|The Phantom Menace|1999
II|Attack of the Clones|2002
III|Revenge of the Sith|2005
IV|A New Hope|1977
V|The Empire Strikes Back|1980
VI|Return of the Jedi|1983
VII|The Force Awakens|2015
VIII|The Last Jedi|2017
IX|The Rise of Skywalker|2019
TRAILER|9|20220903

The trailer in the corresponding file in the new directory includes a different date :

cat new_files/episodes.csv

HEADER|episode|title|release_year
I|The Phantom Menace|1999
II|Attack of the Clones|2002
III|Revenge of the Sith|2005
IV|A New Hope|1977
V|The Empire Strikes Back|1980
VI|Return of the Jedi|1983
VII|The Force Awakens|2015
VIII|The Last Jedi|2017
IX|The Rise of Skywalker|2019
TRAILER|9|20220904

To accurately compare the data in these files, we’ll need to ignore the trailer record.

Once again, there are numerous ways to do this. We could use :

grep -iv trailer baseline_files/episodes.csv

HEADER|episode|title|release_year
I|The Phantom Menace|1999
II|Attack of the Clones|2002
III|Revenge of the Sith|2005
IV|A New Hope|1977
V|The Empire Strikes Back|1980
VI|Return of the Jedi|1983
VII|The Force Awakens|2015
VIII|The Last Jedi|2017
IX|The Rise of Skywalker|2019

…which would result in our diff looking like this :

diff -qs <(sort baseline_files/episodes.csv|grep -iv trailer) <(sort new_files/episodes.csv| grep -iv trailer)

Alternatively, if we know that the trailer record is always the last line of the file we can use head to output everything apart from the last line :

head -n -1 baseline_files/episodes.csv

HEADER|episode|title|release_year
I|The Phantom Menace|1999
II|Attack of the Clones|2002
III|Revenge of the Sith|2005
IV|A New Hope|1977
V|The Empire Strikes Back|1980
VI|Return of the Jedi|1983
VII|The Force Awakens|2015
VIII|The Last Jedi|2017
IX|The Rise of Skywalker|2019

…which would entail a diff command like this :

diff -qs <(head -n -1 baseline_files/episodes.csv| sort) <(head -n -1 new_files/episodes.csv| sort)

This being Linux, there are probably several more options, but these should cover at least some of the more common circumstances where comparing files by data is required.

Analytic Functions

Tom Kyte - Mon, 2022-09-12 01:26
Tom, Thanks for providing this forum for answering questions. I'm trying to get the hang of analytic functions and I'm having a problem with the following code. I have to comment out "where dr <= 3)" to get it to work. "dr" shows up as the column heading, but when I add my conditional statement I get "invalid column name". Help! PS. I have your book and I hope you sign on the dotted line for the next one.

select * FROM (select candidate_id, deptid, actual_start_date, annual_sal_amt,
                      dense_rank() over (partition by deptid order by annual_sal_amt desc) dr
                 from obapp1.offer
                where actual_start_date >= '01-JAN-2002'
                  and annual_sal_amt > 0)
-- where dr <= 3)
order by deptid, annual_sal_amt desc
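
(Not part of the original question or of Tom's answer - just a sketch of the usual fix for this sort of "top-N per group" query: keep the dense_rank in the in-line view, filter on its alias in the outer query, and drop the stray closing bracket. Table and column names are taken from the question.)

select *
  from (select candidate_id, deptid, actual_start_date, annual_sal_amt,
               dense_rank() over (partition by deptid order by annual_sal_amt desc) dr
          from obapp1.offer
         where actual_start_date >= '01-JAN-2002'   -- ideally to_date('01-JAN-2002','DD-MON-YYYY')
           and annual_sal_amt > 0)
 where dr <= 3
 order by deptid, annual_sal_amt desc;
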
Categories: DBA Blogs

Analytical Functions- a brief introduction

Tom Kyte - Mon, 2022-09-12 01:26
Tom, One of our main disappointments is that we cannot use analytical functions in SQL from within a procedure, as they are not compatible with the PL/SQL engine. Further, when I try to update my knowledge of the analytical functions, I am unsure how many analytical functions are available with 8i. Can you give a brief introduction to the analytical functions, giving the simplest example possible for each one (as you have done in the case of CASE)? As far as I am concerned, the analytical functions are: 1. group by, 2. rollup, 3. cube, 4. partition, 5. rollover. Tom, kindly give the simplest examples possible, so that we understand the concept first and can then use them to resolve complex functionality. (If we can use the above functions in PL/SQL, I am curious how much coding it would reduce; I guess a lot.) Thank you
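
(Not part of the original question - a minimal illustration of the difference between an aggregate and an analytic function, using the familiar SCOTT.EMP demo table, in case it helps frame the answer.)

-- aggregate: one row per department
select deptno, sum(sal) dept_total
  from emp
 group by deptno;

-- analytic: every row is kept, with the departmental total (and a ranking) alongside it
select empno, deptno, sal,
       sum(sal)     over (partition by deptno)                   dept_total,
       dense_rank() over (partition by deptno order by sal desc) sal_rank
  from emp;
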
Categories: DBA Blogs

HTMX: Fetch Edit Form Data - Django CRUD, part 3

Andrejus Baranovski - Sun, 2022-09-11 13:41
HTMX makes it possible to call the backend from HTML without JavaScript. You can add HTMX attribute to HTML tag, for example, button, and call backend endpoint. HTMX can process HTML fragment response and inject it into the Web page. This allows achieving partial page refresh, without full page reload.
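
A hypothetical one-liner illustrates the idea (the endpoint URL and target id are made up):

<!-- fetch the edit form from the backend and swap the returned HTML fragment into #form-area -->
<button hx-get="/employees/42/edit/" hx-target="#form-area" hx-swap="innerHTML">Edit</button>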

Parallel Default

Jonathan Lewis - Fri, 2022-09-09 04:25

“Why did my query go parallel?”

It’s a question that crops up from time to time, usually followed by a list of reasons why it shouldn’t have gone parallel – no hints in the query, table is not declared parallel, parallel_degree_policy is set to manual etc.

When the question appeared recently on the Oracle developer forum it turned out that the table in question was declared as “parallel (degree default)”, which prompted the OP to ask the question: “is parallel = default not equivalent to parallel = 1”.

The answer to the question is that the two options are not equivalent – but that’s not the point of this note. Here’s a little script to test the claim:

drop table t1 purge;

create table t1 pctfree 90 as select * from all_objects where rownum <= 50000;

select degree, instances from user_tables where table_name = 'T1';

explain plan for select sum(object_id) from t1;
select * from table(dbms_xplan.display);

alter table t1 parallel (degree default);
select degree, instances from user_tables where table_name = 'T1';

explain plan for select sum(object_id) from t1;
select * from table(dbms_xplan.display);


I’ve created a table in the simplest possible way, but picked a fixed number of rows (to help reproducibility) and – because parallel is usually about “big” objects – I’ve left a lot of empty space (90%) in each block.

Then I’ve checked the execution plan for a very simple query that can only do a full tablescan, with the two declarations of parallelism set.

Here are the outputs of the 4 queries I’ve run:

DEGREE                                   INSTANCES
---------------------------------------- ----------------------------------------
         1                                        1

1 row selected.


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     5 |  1275   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
|   2 |   TABLE ACCESS FULL| T1   | 50000 |   244K|  1275   (2)| 00:00:01 |
---------------------------------------------------------------------------

9 rows selected.


DEGREE                                   INSTANCES
---------------------------------------- ----------------------------------------
   DEFAULT                                        1

1 row selected.



PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3110199320

----------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 |     5 |   350   (0)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |     5 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |     5 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |     5 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          | 50000 |   244K|   350   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| T1       | 50000 |   244K|   350   (0)| 00:00:01 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 4 because of degree limit

17 rows selected.


Clearly “parallel default” does not have the same effect as “parallel 1”. Any time you’ve got a query unexpectedly running parallel it’s possible that some table (or index on the table) has been created with a parallel degree of default. (More commonly, someone may have rebuilt an index “parallel N” to get the job done more quickly and then forgotten to alter the index back to parallel 1 – or noparallel – afterwards.)
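
(As an aside, and not part of the original note: one way to hunt for such objects and put them back is sketched below - the index name here is hypothetical.)

select table_name, degree from user_tables  where trim(degree) not in ('0', '1');
select index_name, degree from user_indexes where trim(degree) not in ('0', '1');

alter table t1 noparallel;
alter index t1_i1 noparallel;   -- hypothetical index name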

The point of this note, though, is that there are some questions you should not ask until you’ve spent a few minutes thinking about how you might create a model that gives you the answer. There are several reasons for this:

  • The more you do it, the better and faster you get at modelling and understanding – and sometimes you really need to model a complex problem because you’re not allowed to show anything that looks like production in public.
  • If the simple model seems to disagree with the behaviour you see in production it may give you some clues about where to look in the production system for the source of the difference.
  • If the answer isn’t what you thought it would be you can change the question you put publicly to: “I thought Oracle would do X but it did Y; here’s how I tested, is there a flaw in the test?”

It took about 5 minutes for me to run up this demo – that might seem a bit quick but I’ve had a lot of practice (and it took a lot longer to write the note) – and it was, in this case, a waste of my time because I knew the answer; but I often run up little models before responding to questions on the forums or listservers because while I often think I know what the answer “ought” to be I do like to check before I say something that might be incorrect.

AWS EC2 Amazon Linux nginx: [emerg] unknown directive "stream"

Pakistan's First Oracle Blog - Fri, 2022-09-09 01:15

 Creating a reverse proxy for servers in restricted mode using NGINX in AWS EC2 Amazon Linux 2 can be frustrating. The following error is one of those quirks:

[root@ip-10-219-0-153 nginx]# nginx -t

nginx: [emerg] "server" directive is not allowed here in /etc/nginx/nginx.conf:9

nginx: configuration file /etc/nginx/nginx.conf test failed
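
For context, here's the kind of top-level stream block that needs this module - a hypothetical sketch with a made-up backend address:

# /etc/nginx/nginx.conf (fragment): stream {} must sit at the top level, outside http {}
stream {
    server {
        listen 1521;                   # port exposed by the proxy
        proxy_pass 10.0.0.10:1521;     # backend host:port (made up)
    }
}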

Solution:

Here is the quick and easy solution: just install nginx-mod-stream and you should be fine.

[root@ip-10-219-0-153 nginx]# yum install nginx-mod-stream

Loaded plugins: extras_suggestions, langpacks, priorities, update-motd

amzn2-core                                                                                 | 3.7 kB  00:00:00

Resolving Dependencies

--> Running transaction check

---> Package nginx-mod-stream.x86_64 1:1.20.0-2.amzn2.0.5 will be installed

--> Finished Dependency Resolution


Dependencies Resolved


==================================================================================================================

 Package                     Arch              Version                         Repository                    Size

==================================================================================================================

Installing:

 nginx-mod-stream            x86_64            1:1.20.0-2.amzn2.0.5            amzn2extra-nginx1             87 k


Transaction Summary

==================================================================================================================

Install  1 Package


Total download size: 87 k

Installed size: 172 k

Is this ok [y/d/N]: y

Downloading packages:

nginx-mod-stream-1.20.0-2.amzn2.0.5.x86_64.rpm                                             |  87 kB  00:00:00

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

  Installing : 1:nginx-mod-stream-1.20.0-2.amzn2.0.5.x86_64                                                   1/1

  Verifying  : 1:nginx-mod-stream-1.20.0-2.amzn2.0.5.x86_64                                                   1/1


Installed:

  nginx-mod-stream.x86_64 1:1.20.0-2.amzn2.0.5


Complete!

[root@ip-10-219-0-153 nginx]# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful
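
After a successful configuration test, restart nginx so that the newly installed module and the stream configuration are picked up, for example:

[root@ip-10-219-0-153 nginx]# systemctl restart nginx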


Hope that helps.

Categories: DBA Blogs
