Tuesday, November 24, 2009

A Thankful DBA

Things to be thankful for…
Indexes that can help performance oh so much
Developers using bind variables
Backup tapes that work and can actually be used for a restore
One night without a page
Plenty of memory and disk
Having a workaround for an Oracle bug
Successfully upgrading the database
Seeing all of the long hours of prep work pay off when a migration or upgrade runs smoothly, without issues
Knowing that databases are backed up
Dynamic Oracle parameters
Only receiving 10 emails – of course that could mean something else is broken
Finding the table that the user dropped sitting safely in the recyclebin (a quick example follows the list)
Tuning a statement from 20 hours to 2 minutes
Being able to actually use new features
Having a backup DBA in order to enjoy a day of rest
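On that recyclebin item, here is a minimal sketch of what that happy moment looks like; the table name ORDERS is hypothetical:

    -- confirm the dropped table is still in the recyclebin (hypothetical name)
    SELECT object_name, original_name, droptime
      FROM user_recyclebin
     WHERE original_name = 'ORDERS';

    -- bring it back, data and all
    FLASHBACK TABLE orders TO BEFORE DROP;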

Happy Thanksgiving!

Wednesday, September 2, 2009

A restore that just didn't want to happen

So, maybe I should read my own blog postings, or get more sleep, but I recently caused myself plenty of problems when attempting a simple restore.
I had backups, check. I had a point in time I wanted to restore to, check. I had a good reason to restore the whole database, check. What I didn't have was the undivided attention the restore needed, or a proper plan for the things that could go wrong.
So, I started off on my adventure. Knowing that there was no activity on the database, I just chose a time around the point of failure, without checking for fuzzy issues. (I'll come back to fuzzy.)
Opened up RMAN, connected to the target, ran the script to allocate channels for tape, then restore database until time and recover database until time. The restore started, and I thought in an hour I would be good to go again. Checked back, still running; checked back, still running. OK, that is strange: nothing showing issues, it just looks like it is hanging. I wondered if it was waiting for tapes. The simple thing to do, and what I should have done, was to call the backup team and ask about the tapes I was trying to access. Instead I thought, well, let's try again with a different time, because I just need it around this point, and maybe I will hit different tapes.
Started it up again, and this time my computer crashed in the middle of it. So several hours later, the restore was still not complete, and now I really had a database that was not usable. Fun stuff.
Cleared out all of the processes that might have been left over from the crash, picked my point in time, and contacted the backup team to make sure I didn't have locks on the tapes and that they were available. Restore, recover. Open database - file 1 needs media recovery. And this is where FUZZY comes in. The point in time I had randomly picked, without doing my homework, left a restored datafile with a different SCN than the others. So, at this point, of course, I am wishing that I had done my homework and treated this restore as a production restore, instead of thinking, it is just a test system, so no big deal.
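For anyone wanting to do that homework up front, a quick look at the datafile headers before opening the database shows whether any file is still fuzzy or checkpointed at a different SCN than the rest; a minimal sketch:

    -- any FUZZY = 'YES' or mismatched checkpoint SCN means more recovery is needed
    SELECT file#, fuzzy, checkpoint_change#, checkpoint_time
      FROM v$datafile_header
     ORDER BY file#;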
I would like to say that after all of this I was able to restore on the next attempt, but I ran into one more issue. Since I was duplicating production into test, I was using duplicate, and the restore was going into the flash recovery area, and guess what... all of these attempts had filled up that destination. Of course! A simple query to find the space available, clear that area out, and I was ready for another attempt.
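That simple query looks something like this; both views have been around since 10g:

    -- how full is the flash recovery area, and how much is reclaimable?
    SELECT name,
           ROUND(space_used / 1024 / 1024 / 1024, 2)        AS used_gb,
           ROUND(space_limit / 1024 / 1024 / 1024, 2)       AS limit_gb,
           ROUND(space_reclaimable / 1024 / 1024 / 1024, 2) AS reclaimable_gb
      FROM v$recovery_file_dest;

    SELECT file_type, percent_space_used, percent_space_reclaimable
      FROM v$flash_recovery_area_usage;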
I am sure at this point you are either crying or laughing, with me or at me. But I share this because there were several things I could have done along the way to make this restore simple to begin with. Even the simple tasks we perform can cause issues with the database and the things that we touch. By not treating this at the same level as a production restore or issue, I wasn't prepared as I should have been. Did I create some great documentation on the problems and how to fix them to prevent this in the future? I sure did! But that really shouldn't be the point of doing a restore. I am hoping to save others from going through the same process and trouble, and it has already been documented ;-)

Monday, August 24, 2009

Characterset Woes

Ever create a database with a character set only to find out later that the application requires something different? OK, so now what: recreate the database? Change the character set?
Changing the character set is definitely an option, but there are some hoops to jump through to make it happen. Depending on when it is discovered that a different character set is needed, recreating the database is also a valid option.
So, since there are issues to work through with character sets, let's first go through some of the basic discussions to have when deciding which character set to use. With international databases, and several platforms offering national character set datatypes, there are many combinations to choose from. I was of the mindset to just use the current UTF8 version and then set the varchars big enough to handle any language that comes their way. That might work for an application where there is discussion about the datatypes and control over the code with the developers, but for reporting and other applications sitting on top of the database it might not be the best approach. Make sure to double check, and maybe even ask the vendor again, which database character set and which national character set are needed. Also, when looking at what character set to use, the Oracle Globalization documentation provides some helpful hints, including thinking about supersets when planning in case you have to change.
Even with great planning, or when needing to use an existing database, a character set change might still be required. There are several good notes and tricks out there on how to do this, but I thought I would add my quick checklist here to help out where possible, since I just went through this pain. In my case I have existing databases where the NCHARs and NVARCHARs will now be used, and the vendor requires a specific national character set.
I decided that I didn't want to recreate the database and do an export and import to switch over, but I checked to make sure that NCHARs, NVARCHARs, NCLOBs, etc. are not currently being used. There were no values here from a user perspective, but there might be some in the system tables. If there are any N-values, export those tables and truncate them; having the columns in the database is not a problem, but values populating those columns are (a query like the one below will find them). The character set that was needed is a strict superset of the current one, and again the Oracle documentation provides a list of which character sets can be changed to which others.
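A minimal sketch of that check, trimming the obvious dictionary owners; adjust the exclusion list to taste:

    -- find every N-type column that could block the character set change
    SELECT owner, table_name, column_name, data_type
      FROM dba_tab_columns
     WHERE data_type IN ('NCHAR', 'NVARCHAR2', 'NCLOB')
       AND owner NOT IN ('SYS', 'SYSTEM', 'XDB')
     ORDER BY owner, table_name, column_name;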
Now it appears that it is just a quick ALTER DATABASE NATIONAL CHARACTER SET NEW_CHARACTERSET, right? Probably not; additional checks are needed. Some assumptions are also being made here: that an spfile is being used, that RAC clusters are altered to single-instance mode to change the character set, and that the check that the data types are supported in the new character set has been completed.
XDB tables use N-data, and this can be truncated if there are under 7 rows in the tables xdb.XDB$QNAME_ID and xdb.XDB$NMSPC_ID (open a case with Oracle if there are more than 7). These are the tables that caused me a lot of headache, because I kept getting ORA-12717: Cannot issue ALTER DATABASE NATIONAL CHARACTER SET when NCLOB, NCHAR or NVARCHAR2 data exists, and wasn't sure where it was coming from.
After dealing with this data, run csscan as sysdba: csscan FULL=Y TONCHAR=UTF8 LOG=check CAPTURE=Y ARRAY=1000000 PROCESS=2.
Shutdown the database and startup in restrict mode. Other parameters that need to be set: job_queue_processes=0 and aq_tm_processes=0. Then ALTER DATABASE NATIONAL CHARACTER SET NEW_CHARACTERSET and run $ORACLE_HOME/rdbms/admin/csalter.plb.
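Pulling those steps together, a condensed sketch of the sequence as described above; NEW_CHARACTERSET is a placeholder for whatever the vendor requires:

    -- as SYSDBA, after csscan reports a clean result
    SHUTDOWN IMMEDIATE;
    STARTUP RESTRICT;
    ALTER SYSTEM SET job_queue_processes = 0;
    ALTER SYSTEM SET aq_tm_processes = 0;
    ALTER DATABASE NATIONAL CHARACTER SET NEW_CHARACTERSET;  -- placeholder target
    @?/rdbms/admin/csalter.plb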
The new character set should now be in place, and the next steps are just to put back the things that were changed to make this happen.
Set job_queue_processes and aq_tm_processes back to their original values, then shutdown and startup. Don't forget about the data in the XDB tables, which can be reinserted from $ORACLE_HOME/rdbms/admin/catxdbtm.sql.
Are you starting to understand why I opened with "choose your character set wisely"? There are several steps needed for the change, as well as knowing that the database is able to change over and that the data is either not there or can be exported to make it happen. These are just some of the highlights I ran into going through these steps, which will hopefully help someone out with their next character set change.

Friday, August 21, 2009

Change Controls and Audits

Some of the day-to-day things we do as database administrators are not completely understood by the people who might be reviewing or auditing the changes. For them, a rebuild of an index or analyzing statistics might not be so straightforward. Are these even considered changes in the database, and why would they need change controls around them? Well, even adding space to a tablespace could cause trouble on the database. It would have to be a really bad day, but it is possible to mistype where a datafile is supposed to go, or to fill up a file system with the wrong size information (thank goodness for resize). Needless to say, the things we do against the database, even though minor, can have an impact on a system and may be reviewed by a change board because of the process controls for compliance.
Now, considering that someone reviewing a minor change may not have the information or experience to know what that change does, and that analyzing statistics could mean something very different in their world, why not provide them with some basic information? Instead of submitting a change that says rebuild indexes and leaving it at that, state: Rebuilding indexes online to reduce fragmentation of the index space usage for better performance of the indexed data. This does not change any of the data within the table or index, it just reorders it for quicker access, and it can occur while users are accessing the system. Or, the same with statistics: updating table statistics provides Oracle information about the table, such as row counts, how many distinct values, indexes, and more about the type of data, so it can develop a good query plan to access the data as efficiently as possible.
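For reference, the two tasks described above boil down to something like this; the schema and object names are hypothetical:

    -- reorganize an index while users keep working (no data change)
    ALTER INDEX app_owner.orders_ix REBUILD ONLINE;

    -- refresh the optimizer's picture of a table and its indexes
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APP_OWNER',   -- hypothetical schema
        tabname => 'ORDERS',      -- hypothetical table
        cascade => TRUE);         -- gather index statistics too
    END;
    /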
Just a little more detail about why and what is changing honestly makes the change a little less scary. It also states whether data is changing, which from a SOX perspective is very important if a task a DBA performs changes data. Now, as DBAs, we don't want the responsibility of changing any data, but the people reviewing changes and verifying processes might need verification that the task being performed is not doing that. They might know that system-type permissions could allow for it, so the more detail that can be provided about a change, the more useful it is.
This also applies to patching and applying CPUs (Critical Patch Updates): reading the release notes, understanding the areas that might be affected, and providing some basic information about them. For example, if there is a security fix that touches a type of driver connection, then testing for the patch includes testing the connection to the database through that driver and verifying that all connections are still good. Or even stating: the application doesn't connect through this driver, so there is no effect from this change (though, as part of a test plan, there is probably still connection testing from the application). Test plans can reflect the details of the security fixes; even a quick description of the issues being fixed, with some basic information, can really help when approving a change or validating that a change is what it says it is.
So, words that are thrown around between DBAs, rebuild, statistics, CPUs, might have a different meaning to others outside the world we live in who need to review or approve the changes we make. More detail, or some basic training on what these simple, minor tasks performed against the database actually do, will help bridge that gap. Both sides will benefit from understanding the change, for approvals and for validation that the processes are being followed.

Monday, August 3, 2009

Never underestimate a backout plan

Every well-planned and thought-out change could be implemented without problems in several environments. But it only takes a small issue, a missed step, or something that wasn't completely tested to cause a problem. Following a process to implement a change is important, but knowing which steps can be recovered from or rolled back is extremely important.
Can a step be repeated without an issue? What happens if you hit an error after a step, or the all-dreaded forgotten step? Checks throughout the process, and knowing whether an error at a given point means redoing everything or just running something to fix it at that spot, will help prevent larger issues. Being able to isolate a change and know where the errors could come from will help solidify the change process and make for a more robust implementation.
If this happens, then I have options to back out the change, and here are my steps to do that. If the change doesn't work or completely fails, I have a backup to restore, and I can either start again or live to try another day.
I could have applied this patch in 20 environments the exact same way, and still run into an environment where the code was different or the parameters were slightly off and it causes an issue. So how do I remove the patch, and what needs to be run afterwards to clean it up?
Compliance and IT processes should include test plans, so you know what you need to test to validate the change as well as what you need to do to back it out. Good backup strategies are also key here, along with understanding how long after the change the backups are still valid. Knowing how to put the database back to the state before the change will help if you have already hit that point of no return on the backups.
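One concrete way to build a backout point into the plan, not something this post spells out but worth noting, is a guaranteed restore point taken just before the change; it assumes a flash recovery area and archivelog mode are in place:

    -- before the change
    CREATE RESTORE POINT before_patch_x GUARANTEE FLASHBACK DATABASE;

    -- if the change has to come back out
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    FLASHBACK DATABASE TO RESTORE POINT before_patch_x;
    ALTER DATABASE OPEN RESETLOGS;

    -- and either way, drop it once the change is confirmed good
    DROP RESTORE POINT before_patch_x;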
Implementing changes in databases can be a difficult process, or it can be planned for the unexpected. Having test plans that hit the critical areas is important, and because of sizing and other factors, even the best test plans are not going to test everything all of the time. Be prepared: even if it is the last database for the change, something could go wrong, and reverting the change might be inevitable. Backout steps created before the change, and even tested before applying the change in all of the environments, will eliminate some of the fear of rolling out changes. Keeping the databases stable, available, and productive after a change means good planning and being prepared in this area.

Tuesday, June 30, 2009

Monitoring Scripts vs. Tools

If you have been monitoring databases for a while, you probably have a set of scripts that you run against the database to provide you valuable information. The scripts might tell you if a tablespace is getting full, which indexes might need to be rebuilt, whether there are any errors in the alert logs, and other health checks against the database. If the monitoring provides good information in a timely manner, the DBA is able to be more proactive, like adding datafiles to tablespaces before they run out, or reacting quickly to an issue in the alert log and contacting the application team before they have a chance to pick up the phone.
So, are monitoring scripts being replaced by tools? Tools such as HP OpenView or Oracle Enterprise Manager will provide alerts and notifications about several of these issues as well. Just configure a couple of thresholds and away you go. But what if the configuration takes more work than the quick kornshell script? For example: monitor tablespaces and let me know when they get under 20% free, but if it is a large tablespace, say 4TB, use 80GB as a threshold instead of a percentage. I'm sure this can be done with the tools, but I still haven't figured out quite how to do it, whereas my script can provide this list very easily (something like the sketch below).
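A minimal sketch of that threshold logic in plain SQL; the 1TB cutoff for "large" is an assumption to adjust:

    -- alert on < 80GB free for big tablespaces, < 20% free for the rest
    SELECT d.tablespace_name,
           ROUND(f.free_bytes / 1024 / 1024 / 1024, 2)  AS free_gb,
           ROUND(100 * f.free_bytes / d.total_bytes, 1) AS pct_free
      FROM (SELECT tablespace_name, SUM(bytes) AS total_bytes
              FROM dba_data_files GROUP BY tablespace_name) d,
           (SELECT tablespace_name, SUM(bytes) AS free_bytes
              FROM dba_free_space GROUP BY tablespace_name) f
     WHERE d.tablespace_name = f.tablespace_name
       AND ((d.total_bytes >= 1099511627776                  -- "large": 1TB or more
             AND f.free_bytes < 85899345920)                 -- under 80GB free
         OR (d.total_bytes < 1099511627776
             AND 100 * f.free_bytes / d.total_bytes < 20));  -- under 20% free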
So, how do we let go of these monitoring scripts that have been around since Oracle 7, something we have depended on all these years to do our checks of the database, and use a tool to do this for us? Well, maintaining the scripts does take time, and learning new things is fun as well. I think they both have a place in our environments. Setting up a tool out of the box might even provide a quick report much faster, which might be something you wish you had.
When looking at the tools, be grateful for having them, because some of these scripts were developed because the budget didn't always allow for tools in the environment. But consider what is important to monitor, and consider how easy the tool is to configure and later change if needed. Let the scripts and the tool run in parallel for a little while to confirm the same alerts and information are being sent. Then, if there are those one or two little things that the scripts have been able to do better, keep the scripts around (maybe even let the tool company know of an enhancement idea). Also, keep an eye on tool upgrades for new things that they monitor that you might not have thought of. Enjoy getting health checks and proactive monitoring from whatever is available to you in the environment, because isn't it really about being able to address a problem very quickly, or prevent one from happening in the database, anyway?

Wednesday, June 17, 2009

Something is wrong with the database

So, the emails start flying: something is wrong, the database has a problem. That is a very typical situation, and instead of defending the database right away, take some time to do a quick check of a couple of things.
Check number one might just be too obvious, but check the alert log for errors. Validate that there is nothing goofy going on. And while you are checking out the bdump directory, a quick glance at udump for any trace files that might be out there could also show some information.
Check number two: any invalid objects or unusable indexes? Make sure that all procedures, views, and triggers have a status of VALID, but before recompiling, grab that last modified date, because it might be needed later. Also note any unusable indexes that might need to be rebuilt, which tables they are on, and whether they are part of the issue.
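A quick sketch of that second check; LAST_DDL_TIME is the "last modified" date worth grabbing:

    -- anything invalid, and when it last changed
    SELECT owner, object_type, object_name, status, last_ddl_time
      FROM dba_objects
     WHERE status <> 'VALID';

    -- indexes needing a rebuild, and the tables they sit on
    SELECT owner, index_name, table_name, status
      FROM dba_indexes
     WHERE status = 'UNUSABLE';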
Check number three, validate that statistics are up to date on indexes and tables.
And check number four: make sure that no objects were recently changed. Check the modified date on all of the objects. Even a modification to a data type can cause a join that previously worked to fail.
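Checks three and four can come from the dictionary as well; the schema name is hypothetical, and the 7-day and 1-day windows are arbitrary choices to adjust:

    -- tables with missing or stale-looking statistics (hypothetical schema)
    SELECT owner, table_name, last_analyzed
      FROM dba_tables
     WHERE owner = 'APP_OWNER'
       AND (last_analyzed IS NULL OR last_analyzed < SYSDATE - 7);

    -- anything modified in the last day
    SELECT owner, object_name, object_type, last_ddl_time
      FROM dba_objects
     WHERE last_ddl_time > SYSDATE - 1
     ORDER BY last_ddl_time DESC;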
Maybe you use the checks in a different order, but with just these four, any obvious errors on the server have been found, anything that has changed has been validated and noted, and statistics have been checked, which either shows that this regular maintenance is not running or that things are looking good and up to date on the database.
So, is something wrong with the database? Possibly, but now, after these quick checks, you can pull out more detail about what the users are seeing and what could be wrong. There is also supporting information about whether things have been changed or modified, to help drill down to the issue at hand.

Friday, May 29, 2009

DBA Lock Down

So, what is the SYS password really needed for anyway? Is not having the SYS password really going to keep a DBA from logging in as SYSDBA or getting the job done? Well, probably not, especially if this access isn't locked down at the host level. If a DBA is logged in to the host as oracle, there is probably a way to log in as SYSDBA, either as SYS or by granting the access to the DBA's own login. Another question: DBAs, do you really want to log in as SYS? If it is a habit to go to the host as oracle, then do a login as SYSDBA, isn't this just setting you up for trouble? Hopefully there is some sort of auditing in place to capture when the database is accessed as SYSDBA, but logging into a system with a least-privilege user is always a good idea. It not only prevents accidentally doing something on the system without consciously knowing you are about to make a change and need special access, but also gives you separation of duties between normal day-to-day monitoring and performing changes.
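On that auditing hope: SYSDBA connections already land in the OS audit trail, and one static parameter extends this to everything SYS does; a minimal sketch:

    -- audit every statement issued by SYS / SYSDBA sessions (restart required)
    ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;

    -- the audit records are written to the OS location shown here
    SHOW PARAMETER audit_file_dest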
There are not too many times that I have needed to log in as SYSDBA. One example has been at creation and configuration of a new instance; of course, since it is a new instance, there is no data and there are no users to mess up with any changes, so it is a fairly safe login. It was also needed to restore one database and clone another. Beyond that, there are scripts that can be set up to stop and start the database, and specific permissions that can be granted, and then logging in as SYS seems not to be needed.
So, what is the big deal about logging in as SYS? Well, besides it having all of the permissions to do anything in the database, I guess I have normally viewed avoiding it as a best practice that might even protect me from myself. But maybe I have been the only one to shut down the wrong database. I have also found it easy to complete my job without those permissions, and for the few times it is needed, there is a way to grab the password and complete the task.
Hide that password, lock it away, forget you even know about SYS, and use only the permissions needed.

Thursday, May 7, 2009

Time to apply what was learned...

Even though Collaborate 09 - IOUG Forum has come to a close for this year, in going back home I am thinking of all that can be applied back in the "real world". The amount of learning and information that is packed into such a short amount of time is incredible. Everything from OEM tricks and tips on installing and configuring, to RAC and 11g new features. Support for the current Oracle 10g database has been extended, but with all of the new features of 11g, upgrades should be in the planning. Orlando was really the place to be this past week if you use any of the Oracle stack; learning about the individual pieces, as well as how they all work together, is really a big advantage of this conference. Getting to know members of the IOUG, and learning what they want to hear about and whether the sessions they attended were useful, made for great conversations in the evenings. I really enjoyed hearing about all of the different presentations and what was good and not so good. It is amazing that you can pick up a tip to improve your backup strategy, learn how a company is using Streams, and then the best way to secure your database, all before noon each day. I was also able to step out of my normal database realm and learn about what Oracle is doing in the content management and records management area. Then there were also sessions on SAP and PeopleSoft. So, start planning if you are sorry you missed all of the great learning: Las Vegas, April 2010.

Wednesday, April 15, 2009

Next CPU...

So, if you are like me and dealing with a very large environment, you probably feel like you just finished patching with the January Critical Patch Update. It is April already, and the April CPU was released last week. Since we all have our plan and process in place, it is a piece of cake, right? OK, so we might not all have a complete process in place, and some of this feels like we are just constantly patching databases, but maintaining a secured environment is important.
In reviewing the release notes, there are some important patches to apply; there are new exploits on the database side. The affected components are listed in the documentation as well, allowing the testing and validation to focus on those areas without having to worry about the others. This is also where it pays off if, when installing Oracle, only the components that are actually used were installed: the patches can still be applied, but testing becomes very simple when only one or two installed components are affected (the query below shows what is installed).
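A minimal sketch of checking which components the patch testing needs to cover:

    -- what is actually installed in this database, and in what state?
    SELECT comp_name, version, status
      FROM dba_registry
     ORDER BY comp_name;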
Having a policy from the security team in place has really helped with the deployment of patching. It isn't just the DBAs saying we need to patch, but an overall security policy requiring us to. This adds support for testing and for getting the needed downtime windows. Overall security patching also helps coordinate the different levels of patching, from the OS to the application layers. Exceptions are then required from any application team not able to allow the patching, which pushes back on the vendors of these applications and, I believe, gets them to work on developing standards around patching and security fixes. I think this helps the overall security posture of these systems.
So, policies, processes, and patching: all good things for those of us supporting these important business applications and environments.

Monday, April 13, 2009

Backup Strategies

I really should say recovery strategies instead of backup strategies. Every time I set up a new database or learn what an application really does, in the back of my mind I am wondering: if something were to happen to this database, is the current recovery strategy going to work? Sure, I can use RMAN and even exports to take backups of the system. I can also verify that backups run every night and the tapes are good, but is the application going to be in a state that I can recover, and is it really going to be as simple as recover database?
In moving to an even more highly available system with RAC, I wonder whether, because you can fail over to another node, backup strategies might not be considered as important. But there are so many other things that can go wrong. What if a security patch isn't applied correctly, or a hotfix for the application is rolled out and the results in a table are incorrect because of it? Or even better, because you and I know that there are places for ad-hoc queries in applications, someone runs an update or changes a table structure; what is going to be the best way to recover now?
I think the best thought-out backup strategies are ones that include these considerations. Thinking of the end result, actually recovering a database, gives insight into what needs to be backed up and how frequently, along with an understanding of which pieces are the most important and customized. In a large environment it is very difficult to implement several different strategies, but at least consider: if I have RMAN, flashback, and exports implemented, which one am I going to use first to recover? Can I just flashback a query or a table, and how big does that flashback area really need to be to provide what I need to get it back quickly? Import might take too long to run, but can I use that information in a test database to reconstruct what is needed so the production system isn't down? With high availability, can I fail over quickly, or do I have a place to run a restore from RMAN in a real disaster?
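On the flashback questions, a couple of quick looks; the table name is hypothetical, and FLASHBACK TABLE assumes row movement is enabled on the table:

    -- how far back can flashback currently reach, given the area's size?
    SELECT oldest_flashback_scn, oldest_flashback_time
      FROM v$flashback_database_log;

    -- undo a bad ad-hoc update on one table (hypothetical name)
    FLASHBACK TABLE app_owner.orders
      TO TIMESTAMP SYSTIMESTAMP - INTERVAL '30' MINUTE;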
So, think recovery, and think about what is in place to restore a database. And if you want to have more discussions about this, join me at Collaborate 09 - IOUG Forum, which will be a great place to discuss recovery techniques as well as learn other things near and dear to Oracle technology professionals.

Thursday, March 12, 2009

Repeatable Process Worth the Effort

So, I might be stating the obvious here, but taking the time to create, develop, and review a process for getting a task done will always provide benefits, make things more efficient, and produce better results. Take, for instance, upgrading databases or applying patches, something that will consistently be part of the life of a DBA. What if the deadline means the upgrade has to be done very quickly and there is a need to show results as soon as possible? Does developing a process and putting together a test plan count as showing results?
Isn't that one of the problems we have when faced with deadlines? We might have to upgrade a database much quicker than planned, so the steps or a test plan may not be documented as they should be. Then, when handing the upgrade over to another team member or team for patching in production, time is wasted "guessing" what was done in the test environment, because there wasn't time to at least document the steps or create the process.
Even if there are only a couple of databases this time around, there will be future upgrades and patches to be applied. A repeatable process and a documented plan can go a long way for current and future tasks.
With the IOUG Security Patching survey results out, I have been asked recently about what it takes to get the patches out there and what some best practices are. My answer is a repeatable process. We can collect best practices on upgrades, adapt them for our environments, create test plans around the applications and other pieces of our environment, throw in a little bit of documentation, and before we know it: a repeatable process. The trick is to set up this process the first time around without putting the deadlines in jeopardy. Honestly, it might take working more hours in a day, but not having to go through the whole effort each time will be well worth it.

Wednesday, March 4, 2009

Black Belt Attitude

I started martial arts recently, and our instructor was describing to us how important attitude is during class and outside of class. The questions were posed: do you have a "Black Belt Attitude"? Do you have a "Can Do" attitude? Black belts have a positive attitude, and they can get it done no matter what it takes. So, I can look at class with the thought that I am just a white belt and there is no way I can do these things yet, or I can be there trying every move, enthusiastic that I am going to get it, and setting my goal for the black belt.
The attitude doesn't stop with class. This is something that can easily be carried over to other parts of life, especially work.
A positive attitude at work goes a long way toward how things get accomplished. Taking ownership of the task at hand and doing it to the best of your abilities, setting goals to develop new skills and keep other skills and knowledge current, being willing to take on new responsibilities or even ones that others don't want: these are all part of that "Black Belt" attitude.
There are tasks I don't want to do and people I may not want to deal with that pull me away from my goal of developing this attitude. There are projects being cut and people being given less incentive to do their current tasks, but this should push us even more to do what we can with what we have. Those of us who stay positive and work a little harder and smarter now will reach that goal even sooner.
Just as I can't go from white belt to black belt tomorrow, this attitude also can't happen overnight. There is training needed in both technical and mental skills. Developing the attitude of "I can do this", and learning to maintain that good attitude, is a key part of the mental area. Along with this training, focusing on a goal is helpful. My goal is to earn a black belt, learn something new, and conquer a challenge. I am also not alone, so when my bad attitude surfaces there are people who can assist. It is good to have accountability for meeting goals and staying on track. Having people I can learn from and encourage is important, and good attitudes are contagious. For martial arts, I have a class to go to with my girls; for work I have the IOUG and the user group network. I think this is a main reason I have been active in the user group community and enjoy sharing and learning from others. So, I encourage you to get involved in a community to help sharpen your skills and have the accountability to do an attitude check.
Just imagine what would happen if we all came to work with a "Black Belt Attitude". The encouragement, positive outlook, and willingness to get things done could make projects happen that you never thought possible.

Wednesday, February 25, 2009

IOUG Security Patching Survey Report

It is great to have an opportunity through the IOUG to participate in the creation of a survey, and it is even better when, working collaboratively with Oracle, you get to see how the results of that survey are being used. So, today IOUG is releasing the results of a survey that collected information about the security practices of IOUG members around the Critical Patch Update (CPU). The survey was designed in collaboration with Oracle’s Global Product Security organization, under the leadership of Mary Ann Davidson.

There were a couple of main goals for the survey. From an Oracle perspective, there was a desire to better understand customer security patching behaviors. For the IOUG, this was important as well, along with providing the collected feedback back to Oracle through IOUG's participation in Oracle's Security Customer Advisory Council (SCAC).

The survey includes responses from 150 participants, who indicated that they are directly involved with applying CPUs and patching the Oracle environment. As initially planned, the results of the survey were presented to the Security Customer Advisory Council. IOUG's participation in the SCAC reflects IOUG's customer advocacy role. It gives IOUG members a voice to provide feedback to Oracle about its product security roadmaps and assurance activities.

The survey was designed to look into security patching policies, practices around the application of the patches, their importance to Oracle users, and was intended to identify factors that would contribute to easing the application of patches. Check out the survey report on the IOUG website: http://www.ioug.org/.

What I found interesting in the results: only about 1/3 of the respondents have organizational policies requiring regular application of the CPU. Another 1/3 need to justify each patch, and the last 1/3 have no policy to apply Oracle security patches (or other vendors').

The CPU is generally considered important for maintaining a proper security posture, and 55% of the respondents reported that they have applied the latest CPU or are one cycle behind. This leaves the rest several months behind (two or more CPU cycles late) or not applying the patches at all.

The survey then asked what factors would help with timely and more consistent application of the CPUs. Responses were very consistent. According to the respondents, organizational policies are as important to CPU application as tools or documentation to test before deployment. Each of these answers was reported by roughly 1/3 of the respondents. (Another 16% indicated that a massive malware outbreak would "help" in getting the patches applied more consistently.)

Our database environments tend to be complex, with several different applications accessing several databases. Applying patches tends to bring the fear of what is going to break, so having organizational patching policies would help offset having to justify the patching each time. In addition, having documentation or tools to be better able to test changes to the environment before the actual deployment of the CPUs would help reduce the risk of outages, and possibly reduce the cost and time required to implement a security patching policy.

Again, security patches are important to Oracle environments, and the general feedback here was positive, with concerns about how to test and how to get proper policies in place. Such feedback is valuable to the IOUG! It allows us to come up with a prioritized list of improvements, recommendations to Oracle, and other educational outreach, which can be offered to members to help them promote better security practices in their Oracle environment.

Education for the IOUG community is being delivered through webcasts and through the Collaborate 09 conference. There are several presentations on best practices related to securing the Oracle environment, as well as sessions specifically dedicated to the application of CPUs.

Check out more information about Collaborate 09.

From an Oracle perspective, this survey allowed them to develop initiatives to help customers with testing CPUs, such as enhancements to the CPU documentation and additional features made available through the "My Oracle Support" portal, which allow customers to identify the systems that need to be patched.

Also check out Eric Maurice’s comments about the results: http://blogs.oracle.com/security

CPU Security Survey Report: http://enterprisesig.oracle.ioug.org/
Collaborate 09: http://ioug.org/collaborate09/
Previous blog post and information about the objectives of this survey: http://blogs.oracle.com/security/2008/07/ioug_security_survey_.html

Monday, February 23, 2009

Getting Started

Hi, as you can see from my profile, I am looking forward to writing about database best practices. I have special interests in security and database tuning, and hope that upcoming topics in these areas will be of interest. Speaking of security, there is a webcast coming up about Oracle 11g database security best practices from the IOUG Enterprise Best Practices SIG on Thursday. Check out http://www.ioug.org/, IOUG News.
So, coming soon: more information on recovery of databases, high availability, and security. I have been working on a couple of white papers for these topics and will share pieces along the way.