Oracle Database Data Centre Migrations Again …

I’m far from new to migrating Oracle databases to new servers; I’ve had to do it several times over the past decade and a half. Many of these migrations have been done with minimal downtime, using technologies such as Oracle Standby Database, Quest SharePlex for Oracle, and straightforward restores of hot and cold backups using both RMAN and OS commands, between both homogeneous and heterogeneous server architectures.

So, why has it taken me a while to get the method for this latest one sorted out?

The additional complications this time:

  • The new servers are 50 miles away in a 3rd party data centre.
  • The database files and FRA (Fast Recovery Area) are stored in ASM (Automatic Storage Management).
  • There are two 600GB databases to migrate in parallel.
  • Oracle Streams is running between the two databases (single source, single target, thank goodness).
  • The total outage I have is around 9 hours.

So the basic plan is this:
Taking as read the existence of the new Linux cluster (RHEL 5.9), complete with ASM installed and configured and Oracle homes created for the RDBMS instances:

  • Take suitable RMAN backups – compressed of course to minimise network traffic.
  • Shoot the backups down the line to the new servers.
  • Restore the databases.
  • Periodically copy over incremental backups and/or backups of archived redo logs.
  • Carry out PITR (point in time recovery) to (almost – explanation later) the same time on both databases.
  • Open the databases. Sadly this will have to be RESETLOGS which does have an impact on Streams replication.
  • Reestablish Streams replication.
  • Modify the tnsnames.ora files and configuration files with connection string details for the various applications and interfaces that touch these databases.
  • Final testing and handover.
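For the first couple of steps, the backup command would look something like the sketch below – a sketch only, with channel counts, format strings and tags invented for illustration rather than taken from our actual scripts:

```sql
-- RMAN sketch: compressed full backup plus archived redo logs.
-- Disk group path, formats and tags are illustrative.
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  BACKUP AS COMPRESSED BACKUPSET DATABASE
    FORMAT '+FRA/migration/%d_full_%U'
    TAG 'migration_full';
  BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL
    FORMAT '+FRA/migration/%d_arch_%U'
    TAG 'migration_arch';
}
```

Compression matters here precisely because of the 50-mile network hop – the less we push down the line, the better.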

Before I started practising I was totally OK with all of that plan apart from two things.

  1. I had no idea how to transfer files from one ASM instance to another and didn’t even know that it was possible, but I needed to avoid creating additional file systems if possible and I needed to find a way of transferring data across the network – driving down the motorway, backup media in hand, not being an option.
  2. Knowing that opening a database RESETLOGS breaks the Streams replication, I needed to find out if it was reasonably possible to repair it.  (Reinstantiation from scratch would take about 24 hours and again this had to be avoided if possible).

To answer the first point, I came across the following two invaluable blog entries from John Jeffries and Talip Hakan Ozturk. Obviously I had to adapt what they contain, and I created a few basic tools to generate the transfer scripts and minimise typing errors.

http://johnpjeffries.wordpress.com/2009/12/07/using-rman-to-restore-a-database-to-another-server-in-an-asm-environment/
http://taliphakanozturken.wordpress.com/2012/06/23/how-to-copy-backup-files-from-an-asm-environment-to-another-asm/
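The core of the technique described in those posts is DBMS_FILE_TRANSFER, which can read and write ASM files directly over a database link – no intermediate file system required. A minimal sketch, with the directory, file and database link names all invented for illustration:

```sql
-- On both databases, a directory object pointing into ASM, e.g.:
CREATE OR REPLACE DIRECTORY bkp_dir AS '+FRA/MYDB/BACKUPSET';

-- On the target, pull a backup piece across a (hypothetical)
-- database link SRC_LINK, reading from BKP_DIR on the source:
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'BKP_DIR',
    source_file_name             => 'full_backup_piece.bkp',
    source_database              => 'SRC_LINK',
    destination_directory_object => 'BKP_DIR',
    destination_file_name        => 'full_backup_piece.bkp');
END;
/
```

One call per backup piece – hence the little tools to generate the scripts rather than typing dozens of these by hand.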

The initial transfer of the full backups across the network takes between 7 and 9 hours per database depending on network traffic, and the subsequent RMAN restores take between 2 and 3 hours each depending on the number of channels.

The backups of a day’s archived redo logs transfer in less than an hour, so this is more than acceptable. So the full backups will be copied over sometime during the week before cutover night, with periodic transfers of archived redo log backups to roll the databases forward. Fabulous!
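The restore and periodic roll-forward on the new servers then follows the usual RMAN pattern – something like this sketch, with the until-time obviously invented:

```sql
-- RMAN sketch on the new server (controlfile already restored and
-- the transferred backup pieces catalogued): restore once, then
-- re-run the recovery as each batch of archived-log backups arrives.
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  SET UNTIL TIME "TO_DATE('2013-06-30 22:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
-- Only at cutover, once recovery has reached the chosen point:
-- ALTER DATABASE OPEN RESETLOGS;
```

The databases stay mounted and recoverable all week, which is what makes the 9-hour outage window achievable.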

Now to the problem about getting Streams working again after PITR and opening RESETLOGS.

I found the following helpful Oracle Documentation: http://docs.oracle.com/cd/E11882_01/server.112/e10705/man_gen_rep.htm#i1014311

John Jeffries (again), also has some good stuff in his blog, but his entries referred specifically to downstream capture, whilst we are using Combined Capture and Apply with capture at source.

Here is what I found works best in our situation:

  • On the databases in our current data-centre
    • Stop all user activity on the source system (apart from the streams heartbeat process where a scheduler job inserts into a ‘heartbeat’ table every 15 seconds).
    • Disable the capture process in the source database.
    • ALTER SYSTEM ARCHIVE LOG CURRENT in both source and target
    • Enable the capture process in the source database
    • ALTER SYSTEM ARCHIVE LOG CURRENT in both source and target
  • Take RMAN backups of the resulting archived redo logs.
  • On the databases in the new data-centre
    • We are trying to engineer the situation where the source database is slightly older than the target, so:
      • Perform PITR on the new source database and recover to a point when the capture process is disabled.
      • Perform PITR on the new target database and recover to a point when the capture process had been re-enabled (about a minute newer than the source).
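In SQL terms, the source-side stop/archive/start sequence above is roughly this (using the capture name that appears later in the post):

```sql
-- On the current source database:
EXEC DBMS_CAPTURE_ADM.STOP_CAPTURE('johnji$cap');

ALTER SYSTEM ARCHIVE LOG CURRENT;  -- run on source AND target

EXEC DBMS_CAPTURE_ADM.START_CAPTURE('johnji$cap');

ALTER SYSTEM ARCHIVE LOG CURRENT;  -- run on source AND target again
```

The point of the two log switches is to put clean, easily identifiable markers in the archived redo so the two PITRs can be aimed either side of the capture restart.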

Referring to the Oracle documentation in the link above, in the section:
Performing Point-in-Time Recovery on the Source in a Single-Source Environment: having the target’s highest applied SCN (determined in step 8) greater than the FIRST_SCN (returned by the source dictionary build in step 6) made it possible to omit step 9 (I had no missing transactions I needed to apply) and step 10 – with its 12 sub-steps, which frankly I had no success with (the instructions are a little vague, after all).

So moving on to step 11: I didn’t like the sound of dropping the existing capture process one little bit, but dropping it and recreating it as per step 12 went without a hitch – I just wish there had been an example showing the syntax.

I could completely ignore that the target database had been through PITR; from the perspective of this scenario it was irrelevant. The abridged steps I needed to perform were:

  • Step 1 – Carry out PITR as per the documentation and note the RESETLOGS SCN (53708867168)
  • Step 2 – Make sure the database is open in restricted mode
  • Step 3 – Run the following on the source to get the names of the capture process and rule sets, as the capture process will need to be recreated in a step below
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN RULE_SET_OWNER HEADING 'Positive|Rule Owner' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_OWNER HEADING 'Negative|Rule Owner' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15

SELECT CAPTURE_NAME,
       RULE_SET_OWNER,
       RULE_SET_NAME,
       NEGATIVE_RULE_SET_OWNER,
       NEGATIVE_RULE_SET_NAME
   FROM DBA_CAPTURE;

Capture
Process         Positive        Positive        Negative        Negative
Name            Rule Owner      Rule Set        Rule Owner      Rule Set
--------------- --------------- --------------- --------------- ---------------
JOHNJI$CAP      STRMADMIN       RULESET$_5140   STRMADMIN       RULESET$_6399
  • Step 4 – Skip.
  • Step 5 – The capture process should already be stopped, but check anyway.
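A quick sanity check for Step 5:

```sql
-- The capture process should not be running at this point:
SELECT CAPTURE_NAME, STATUS
  FROM DBA_CAPTURE;
-- Expect STATUS = 'DISABLED' (or 'ABORTED'), not 'ENABLED'.
```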
  • Step 6 – At the source database, perform a data dictionary build:
SET SERVEROUTPUT ON
DECLARE
scn  NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(
    first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
END;
/
First SCN Value = 53710388735
  • Step 7 – Make sure the APPLY queues on the target are IDLE.
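One way to check Step 7 (a sketch – there are several views that will tell you this):

```sql
-- Every apply server on the target should be idle:
SELECT APPLY_NAME, STATE
  FROM V$STREAMS_APPLY_SERVER;
-- Expect STATE = 'IDLE' across the board before moving on.
```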
  • Step 8 – Determine the highest SCN for a transaction that was applied at target and verify that it is higher than both the RESETLOGS SCN (Step 1) and the First SCN (Step 6).
SELECT APPLY_NAME, HWM_MESSAGE_NUMBER
FROM V$STREAMS_APPLY_COORDINATOR;

ApplyProcess
Name                  HWM_MESSAGE_NUMBER
-------------------- -------------------
APPLY$_JOHNJI_3956           53710391425
  • Step 9 – Disable restricted session on the source.
  • Step 10 – Skip.
  • Step 11 – Drop the existing capture process – Yes honestly this is safe.
BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE('johnji$cap');
END;
/
  • Step 12 – Create the new capture process, setting FIRST_SCN and START_SCN to the number returned in Step 6.
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    QUEUE_NAME => 'strmadmin.johnji$capq',
    CAPTURE_NAME => 'johnji$cap',
    RULE_SET_NAME => 'ruleset$_5140',
    NEGATIVE_RULE_SET_NAME => 'ruleset$_6399',
    START_SCN => 53710388735,
    FIRST_SCN => 53710388735
  );
END;
/

Step 13 – Start the new CAPTURE process and check that changes are being applied in target.
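Step 13 is simply a start call followed by watching the apply side – something like:

```sql
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE('johnji$cap');
END;
/
-- Then confirm changes are flowing: the heartbeat inserts should
-- appear on the target, and HWM_MESSAGE_NUMBER in
-- V$STREAMS_APPLY_COORDINATOR should start moving forward again.
```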

Hope that helps someone else!

Johnji

Posted in Database, Migration, Oracle | Leave a comment

Substantial Migration of Data From One Database to Another Last Night …

… had a team of 8 backing me up, but to keep myself awake during the long periods of waiting – it took over 8 hours – I spent my time working through the Manipulating XML Data chapter of Paul Nielsen’s excellent “Microsoft SQL Server Bible 2008”. Unfortunately the downloads on his website are a bit out of date and don’t really match the material in this chapter. So I worked through it all and saved all the examples to a file. I’m just trying to find somewhere to upload it to, just in case anyone else is interested.

Ch 18 – Manipulating XML Data

The data migration? Using the sort of analogy to which Benjamin Zephaniah is prone: we’d spent 4 months building the swimming pool – this was just a case of filling it.

Though no matter how many times I’ve worked on such projects and even designed the methodology, it still amazes me when all the applications and communications still work! It’s that amazement that prevents me from becoming complacent!

Johnji

Posted in Database, Migration, SQL Server | Leave a comment

I’ve Been Over to the Other Side!

“You’ve not updated your blog since October!”

So said Gavin Payne (http://gavinpayneuk.com), organiser of the very first SQL Server user community event I have attended: http://sqlserverintheevening.com.

I had feared that I might feel exposed and alone in a room of SQL Server specialists, but not a bit of it! In that IT-like “How do I start a conversation with a stranger?” kind of way, the room was warm and welcoming.

Things change. It’s no longer reasonable in my position to say: I’m Oracle, I don’t do SQL Server. So! I need exposure – or at least a little immersion!

Not having been able to get to SQLBits in Brighton (work commitments), and with the cancellation of an IT Connections event because the main speaker went AWOL, this was the first available event I’ve been able to get to – and very good it was too!

The two feature presentations, both about an hour long, were on Microsoft’s database in the cloud offering: SQL Azure and upgrading SQL Server databases.

Both presentations were very good and informative, horizon-broadening and confidence-building. Thanks guys. Nicely pitched high-level technical presentations from which I picked up a lot more of the ‘grammatical’ differences between Oracle and SQL Server.

Anyway, why so long since my last entry? Do you want a list? No, but here’s one anyway:

  • I’m in a new role: Database Specialist in our Architecture Team, so now they call me a Technical Architect.
  • Oracle10g upgrades on HP-UX
  • Oracle11g installs on Linux
  • Oracle Grid Control 11g Install and Implementation
  • Designing a new Backup/Recovery plan for hosted databases
  • Planning SAP migration from HP-UX to Linux
  • SQL Server UNICODE support testing (Russian and Chinese).
  • Learning to work with offshore DBAs
  • And much, much more … including ad hoc tasks like having to do my very first point-in-time recovery of a SQL Server production database (before you ask, it was user error – by a privileged user).

To think that 12 months ago I was complaining that I didn’t have enough to do and was contemplating what I might do next. It might have been a case of: be careful what you wish for! But, I may be tired but I’m having fun again!

Johnji

Posted in SQL Server, SQL Server in the Evening | Leave a comment

What is a DBA?

Eleven years!

That’s how long I’ve been in my current job. It’s also probably the last time I tried to define the role of the DBA – in preparation for the interview in 1999. Now I’ve been tasked with writing a paper to define the role of the DBA and how in this globalised world we can best structure things to the maximum benefit of the company.

Now I’m convinced that the senior management believes that the DBA occupies an infrastructure role.  But hang on! My induction into IT was via application development, so I’m one of those that believes DBAing occupies one of the rare grey areas in IT. It’s neither one thing nor the other and personally I’m more than happy to float around in this nebula that lies between the operating system and the application layers.

The only thing I can be sure of is that the discipline gets bigger and harder as time passes and in addition to being the expert on the RDBMS one has been brought up on, there is now an increasing expectation that you should have more than a casual acquaintance with one or more of the others. That is: you’re not the Oracle DBA; you’re not the SQL Server DBA; you’re not the Blah DBA.

No. You’re none of these. You’re The DBA!

So what is a DBA?

Well eleven years ago, I’d have said it was somebody who was highly skilled in the tasks of maintaining an enterprise scale database; its backup and recovery; a mentor to developers who interacted with databases; and partial system administrator.

My opinion has changed. There is just far too much to know now. So what is a DBA? What am I?

Well I think I have become: an attitude; a state of mind; a set of behaviours; a learner; an educator; an enabler; a gatekeeper; a listener; a communicator; fascinated by business, people and technology; totally trustworthy; an optimist; a pragmatist; instead of knowing what, I strive to know about; and above all, a problem solver.

I just happen to apply these characteristics to an ongoing interest in technology, data and information.

So how to express this and the value of my wonderful team in a paper? God knows!

It would be silly in this globalised era not to take advantage of the opportunities to offload the repetitive routine tasks, but how then to express the value of what my rather talented, imaginative and creative colleagues intangibly provide? We learn to the benefit of the department and the company. “Yes, but what do you actually do?” If you don’t produce a tangible deliverable, then a manager cannot measure what you do! So how do you resolve that?

I’ll let you know when I’ve worked it out!

Johnji

Posted in Database, Oracle | Leave a comment

I Got a Reminder Today …

… of why I wanted to be a DBA.

I’d kind of forgotten why, fifteen or sixteen years ago, I had ambitions to be a DBA. Maybe that’s because the role doesn’t seem to be held in as high esteem as it once was. Is that true everywhere? Or is it just where I am?

The new heroes are writing e-commerce systems and fancy front ends. The database is a just another datastore. Who cares if it’s Oracle, MySQL, SQLServer or even XML files? At least that’s the message that comes across. It seems rare that the DBA is considered as someone who needs to be involved early on in projects or even if they have anything to contribute to the design or development of a system.

BUT …

Today I spent several hours in the company of two or three dozen database specialists at a UNIX SIG meeting of the UKOUG (UK Oracle User Group) at Oracle’s UK headquarters at Thames Valley Park in Reading.

Great presenters: David Phizacklea (www.joraph.com); Pete Finnigan (www.petefinnigan.com) and Jonathan Lewis (http://jonathanlewis.wordpress.com) all giving generously of themselves and reminding us all that we are a community of sorts. Being a DBA is sometimes a ‘suicide’ role – damned if you do, damned if you don’t  – as Phiz put it, but here I was amongst people who understood and what a morale booster it was!

Great company, good conversation, fantastic exchange of knowledge and ideas. Thanks Guys!

Johnji.

Posted in Oracle, UKOUG | Leave a comment

The Spanner in the Works!

The in-house written legacy application that still lies at the core of our business systems. That’s the spanner!

The trouble here is that the application is old, there’s no technical upgrade path and the middle-tier actually has Oracle7 client components linked in with it. Now we know that it still runs OK against Oracle 9.2.0.6 and that the Oracle7 client/Oracle9i server combination may even once have been certified, but we’re unlikely to see many of the benefits of replatforming if we merely move from Oracle9i on HP-UX to Oracle9i on Linux and then there are the RDBMS support issues. Do I chance an upgrade to the never certified Oracle7 client/Oracle10g combination?

Now call me old-fashioned but if I was to consider upgrading the database as part of the replatforming, then it should be upgraded on its current platform first then migrate at that version to the new OS.

I thought in for a penny, in for a pound, so I might as well try going straight to Oracle11g. Easy! Not! Oracle 11.2 requires HP-UX 11.31, which is not available for our particular servers, so what about Oracle 11.1? Well, no again! This requires HP-UX 11.23, and nobody is going to invest time and effort into upgrading our HP-UX 11.11 servers as they of course are ‘legacy’. So Oracle 10.2.0.4 it is, because the terminal release 10.2.0.5 isn’t available on HP-UX.

Talk about choice made for you! At least I should get the fast incremental backups I’m after.

Now will it work?

Johnji

Posted in Database, Hosting, Migration, Oracle | Leave a comment

I’ve Got Blancmange Brain Already!

There’s not much of an update this week – or at least there is not much action to report on yet. That doesn’t mean that I’m not deep in thought – perhaps too deep – hence the blancmange!

Whilst the CTO etc. discusses contract detail. I’m starting to think lots and realise the sheer enormity of the task ahead and this is only considering the production stuff!

The new data centre is going to have a lovely brand spanking new 120-spindle dedicated SAN and lots of lovely new blade servers. We’re probably going to change our SAN usage policy to one of SAME (stripe and mirror everything). I don’t think we had it right before; we sort of carved it up into chunks – sort of simulating attached storage – creating all sorts of nasty hotspots!

One of the key things that will change is backup and recovery – and this is driven by segregation of duties: the O/S people are going to look after the O/S; the DBAs are going to look after the databases.

This means bye-bye direct-to-tape backups. Due to space constraints, we don’t do disk backups and therefore risk long recovery times. We mix-and-match RMAN and old-style file system backups, some using split-mirror technology, some not.

I feel a desperate urge to simplify things and get everything working the same way. The GOAL: Incrementally Updated Image Copy backups (with block tracking for performance).
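For reference, the standard RMAN recipe for that goal looks like the sketch below – the change tracking file location and the tag are illustrative, not decided yet:

```sql
-- One-off: enable block change tracking so level 1 incrementals
-- only read changed blocks rather than scanning the whole database:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '+DATA/bct.f';

-- Then, daily from RMAN: roll yesterday's incremental into the image
-- copy, and take a new level 1 incremental for tomorrow's roll.
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'daily_img';
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'daily_img'
    DATABASE;
}
```

The payoff is that recovery starts from an image copy that is never more than a day behind, rather than from a full backup plus a long chain of incrementals.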

The O/S SAs can then put the backups to tape at their leisure – or at least not get me out of bed if the backup has failed due to tape problems.

In brief what we have to migrate (development and test systems excluded):

  • An Oracle11g OLAP database server running on Linux – straightforward apart from the interfaces and data feeds.
  • Two SAP R/3 systems with Oracle10g databases on HP-UX 11.11 (the PA-RISC variety) – likely to be migrated to Linux or Windows (not my first choice honest!) which will be overseen by SAP certified migration people, so again no real risk/problem there.
  • Our two new domestic and international e-commerce systems: Oracle10g on Linux and too young to have got complicated yet.
  • There’s a couple of e-commerce systems we support for other companies in the group with Oracle10g databases sitting on Linux – again reasonably standalone so not difficult from our perspective.
  • Then there is the spanner …

Johnji

Posted in Database, Hosting, Migration, Oracle | Leave a comment