Thursday, December 31, 2009

(Upgrade + Interfaces) = (Potato Chips + Bubble Gum)

Well, we have been live on R12.1 for a full month now. We made it through our first month-end and now we are days away from closing out the year. While no upgrade is free of significant issues, I would have to say ours has played out better than I could have hoped. Really, from an operations standpoint, the only major issues we had were getting our custom interfaces to work properly (which brings me to the topic of this post).

My new personal opinion is that (Upgrade + Interfaces) = (Potato Chips + Bubble Gum). What I mean is that the upgrade by itself is a very good thing, and our custom interfaces by themselves really are superb, but when you combine the two it's like chewing bubble gum while eating potato chips: they just don't go well together.

We have three major interfaces that we maintain, all on the operations side. Two of the three interfaces were created to link a third-party solution into Oracle Applications. In those two cases we were only able to test that information was put into and taken out of the interface tables the same way it was before the migration. As much as you try to replicate these interfaces, without being able to test the process all the way through from beginning to end it becomes very difficult to verify that your solution will process correctly post-upgrade. To complicate matters even more, our two third parties didn't have test instances available in which we could validate our code pre-migration.
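
Most of our 'validation' boiled down to comparing what landed in the interface tables before and after the upgrade. As a rough sketch (your interfaces and tables will differ from ours; MTL_TRANSACTIONS_INTERFACE is just one common inbound table I'm using for illustration), a query like this can at least confirm that rows are arriving and erroring out at the same rate they did pre-upgrade:

    -- Sketch only: compare daily row and error counts in an interface
    -- table across the upgrade window. Substitute whichever interface
    -- tables your custom code actually loads.
    SELECT TRUNC(creation_date) AS load_day,
           COUNT(*) AS total_rows,
           SUM(CASE WHEN error_code IS NOT NULL THEN 1 ELSE 0 END) AS error_rows
      FROM mtl_transactions_interface
     GROUP BY TRUNC(creation_date)
     ORDER BY load_day;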

Here we are now, four weeks later, and we are hoping that we've gotten the last 'glitch' resolved with our interfaces. It's been a hectic and very busy month, essentially 'testing' our updated code in our production instance, but we might finally be on our way to wrapping things up.

The lesson learned: things are sure easier when you don't have custom interfaces. To me it drives home the idea that it's better to have a single, fully integrated ERP system than it is to have multiple systems trying to interface with each other. Granted, the cost of those two solutions is very different, at least until you add in the cost of hiring developers to maintain and constantly monitor your code. For me, I'm taking the integrated system any day of the week.

Thursday, November 12, 2009

The Calm Before the Storm

We are officially two weeks away from taking our production instance down and migrating from 11.5.10 to 12.1. You'd think I'd be swamped with problems and issues from every department, but as of today the only things I have to work on are documentation (training and validation) and rewriting reports. Sadly, our Discoverer reports aren't going to migrate so well since they were built on OPM inventory views.

My mind is going about 2,000 mph trying to see if I've missed any testing or any possible business process that might be affected by the upgrade, but so far things look good. I wish I had a little more stress-free time so I could update this blog with things I've learned during the upgrade, but I think those entries are going to have to come later.

I will, however, make a small list of things to consider during your upgrade:

  • Discoverer reports/workbooks don't migrate, at least that is what Oracle consulting told us. If you are using Discoverer before the migration you're going to have to recreate your workbooks after the migration (assuming those workbooks are built on views that get changed during the upgrade)
  • In OPM, users are forced to choose an existing lot number when performing move immediate and adjust immediate transactions. In 12.1 users are not forced to choose existing lots; they can enter a new lot at any time (subinventory transfers, account alias transactions, inter-org transfers)
  • This one is random, but in 11.5.10 we had routings set up with resources that had a usage of 0. In 12.1, if you have a primary resource assigned to a routing and that primary resource has a usage of 0, then when you run collections the recipe which uses that routing will not be collected, and it will not show demand for that product or its raw materials (see the sketch after this list)

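On that last routing point, a query along these lines could flag the offending routings before you run collections. Fair warning: I'm writing this from memory, so treat every name in it (GMD_OPERATION_RESOURCES, PRIM_RSRC_IND, RESOURCE_USAGE) as an assumption to verify against your own data dictionary:

    -- Assumed names, verify in your instance: GMD_OPERATION_RESOURCES
    -- holds the resources attached to routing operations, PRIM_RSRC_IND
    -- flags the primary resource, and RESOURCE_USAGE is the usage value
    -- discussed above.
    SELECT r.oprn_line_id,
           r.resources,
           r.resource_usage
      FROM gmd_operation_resources r
     WHERE r.prim_rsrc_ind = 1
       AND r.resource_usage = 0;
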
Those are just a few thoughts; hopefully they help you out during your upgrade. Well, I have more reports to rewrite. Wish me luck.

Thursday, October 1, 2009

How MAC Data and Mappings Get Migrated into the SLA

If you've read any of my previous posts you know that I'm by no means an expert on OPM Cost Accounting. However, for the last two weeks the majority of my time has been spent on becoming a wannabe expert in this area. Hopefully I can break down what I think I've learned in a way that's understandable and that doesn't reveal my limited accounting knowledge.

In 11.5.10, OPM customers use the MAC to define how transactions and events will be mapped to the subledger and then on to the general ledger. This mapping can be very generic (when sub-event 'X' occurs, use account number 'Y') or it can be very specific (when sub-event 'X' occurs in Whse 'A' with a reason code of 'B' and account title 'D' and a GL Item Category of 'C', then use account 'Y'). The question, then, is how does this mapping get migrated into R12?

--Please understand that the following opinion and explanation are based on my personal experience, which means they might not be 100% correct--

In order to explain how MAC data gets migrated into the SLA, I first should explain just a titch about how the SLA works in comparison to the MAC. In the MAC you have events, sub-events, account titles, and account mapping attributes, which you use to define account mappings. These same things exist in R12; they are just renamed:

  • event = event entity
  • sub-event = event class
  • account titles = journal line types
  • account mapping attributes = sources
  • account mapping = account derivation rules


In Release 12 your bottom level is your journal line types and your account derivation rules. A new mapping tool is introduced in 12 called Journal Line Definitions. You use journal line definitions as the place where you select which account derivation rule you want to use for each journal line type per event class.

Let's take the event class Miscellaneous Transaction as an example. This event class has two journal line types assigned to it, 'INV' (inventory valuation) and 'IVA' (inventory adjustment expense). With a journal line definition you are basically saying: when event class Miscellaneous Transaction occurs, for journal line type 'INV' use account derivation rule 'X'. Journal line types exist across multiple event classes, so the journal line definition allows you to use different derivation rules per event class. So, for the event class Batch Material Transactions, I can associate account derivation rule 'A' to journal line type 'INV' through the journal line definition, while Miscellaneous Transactions continues to use rule 'X' for that same line type.
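
To make that mapping concrete, here is the shape of what a journal line definition expresses, written as a throwaway query against DUAL (the rule names are invented for illustration; this is not a query against the real SLA setup tables):

    -- Illustration only: the event class / journal line type / account
    -- derivation rule triples that one journal line definition ties
    -- together. All rule names are made up.
    SELECT 'Miscellaneous Transaction' AS event_class,
           'INV' AS journal_line_type,
           'ADR_X' AS account_derivation_rule
      FROM dual
    UNION ALL
    SELECT 'Miscellaneous Transaction', 'IVA', 'ADR_Y' FROM dual
    UNION ALL
    SELECT 'Batch Material Transaction', 'INV', 'ADR_A' FROM dual;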


The journal line type will migrate without any problem because it is essentially just a copy of your existing account titles in 11.5.10. Account derivation rules are where OPM users are going to be affected. Account derivation rules are created for every unique combination of account title, set of books, and account segment. My company has a six-segment accounting flexfield, so in R12 after migration I'll have six account derivation rules per journal line type.

Our accounting flexfield is XX-XX-XXXX-XX-XXX-XXX, so my account derivation rules (ADRs) for journal line type 'INV' will look like this:

  • ADR1 XX
  • ADR2 XX
  • ADR3 XXXX
  • ADR4 XX
  • ADR5 XXX
  • ADR6 XXX


So the one mapping rule in 11.5.10 comes across as 6 different rules in Release 12. Obviously you don't need 6 different rules to define every journal line type every time. But, this post is about how things migrate so I'll have to go over that another time.
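
If you want to see this multiplication in your own migrated instance, the rules land in the SLA setup tables. My understanding (an assumption worth verifying, not something out of the docs) is that account derivation rules are stored one row per segment rule in XLA_SEG_RULES_B, so a rough count per application shows the one-rule-becomes-six effect:

    -- Assumption: migrated account derivation rules live in
    -- XLA_SEG_RULES_B, one row per segment-level rule.
    SELECT application_id,
           COUNT(*) AS segment_rules
      FROM xla_seg_rules_b
     GROUP BY application_id
     ORDER BY application_id;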

I'm a little short on time right now, but hopefully this helps explain a little about how the data and account mapping that existed in 11.5.10 gets mapped over to Release 12 during migration.

Monday, September 21, 2009

OPM and SLA

This week's task is trying to get my hands completely around how to switch from using the OPM MAC to the Oracle SLA module. I had hoped that since we were upgrading from 11 to 12, our cost accounting business logic would migrate over, but unfortunately that wasn't quite the case (some of the logic did migrate, but most of it did not).

Now comes the fun part of trying to get the SLA up and running in a way that will jibe with our cost accounting department. While there's plenty of documentation out there on how to implement SLA, there isn't quite as much about what needs to be done in order for SLA to function properly with OPM. However, today I did find a jewel on metalink that helps quite a bit: Doc ID 822303.1. This doc goes through a step-by-step setup process, including screen shots, of how to get the SLA correctly configured to function with OPM.

The other helpful document I found last week dealing with this topic is a case study done by Satyam Computer Services which outlines a migration from MAC to SLA in R12 for an OPM company. You can also find this on metalink under Doc Id 562601.1.

Hopefully by the end of this week I'll be an expert on OPM and SLA.

Friday, September 11, 2009

EAM Migration to 12.1

Found out yesterday that there are some specific pre-migration setup steps that need to be completed in order for EAM assets to migrate correctly. Apparently EAM gets integrated with Oracle Installed Base in Release 12, and for that integration to work you have to have your Install Base parameters set up appropriately. It would have been nice of Oracle to include this in their upgrade documentation (Oracle Applications Upgrade Guide Release 11i to 12.1.1, Part No. E14010-01), but as of Sept 11, 2009 this step is not included.

From what I understand, a script called eamsnupd.sql runs during patch 6678700 and inserts the EAM asset number into csi_item_instance. But if you do not have the Install Base parameters set up, then when the SQL tries to fetch an internal_party_id it fails to get any records from csi_install_parameters.
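
Based on that, a simple pre-migration sanity check (my own, not something from Oracle's docs) is to confirm that csi_install_parameters actually contains a row with an internal party defined. Note that freeze_flag is my guess at the column behind the 'Freeze' checkbox mentioned in the release notes below:

    -- Sanity check: eamsnupd.sql reportedly fails when it cannot fetch
    -- an internal_party_id from csi_install_parameters. freeze_flag is
    -- an assumed column name for the Freeze checkbox.
    SELECT internal_party_id,
           freeze_flag
      FROM csi_install_parameters;
    -- Expect one row with a non-null internal_party_id.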

It's interesting that they included this setup information in the EAM Release Notes for 12.0.6 but not for 12.1.1. So far I've only been able to find two metalink documents that explain this Install Base setup (437577.1 and 748070.1, the second doc being the release notes for 12.0.6).

So, just to make you aware: if you are upgrading from 11i to 12.1 and you have EAM installed but aren't currently using Install Base, then you need to set up the Install Base parameters before you migrate so that EAM asset numbers can be inserted into the item instance table.

Here is what the Oracle Release Notes say you need to do to set up Installed Base parameters:

'Beginning with Release 12.0, Oracle Enterprise Asset Management is integrated with Oracle Installed Base. Therefore, the Installed Base parameters must be set up to ensure that assets are created correctly in eAM.

We added a new section to the Setting Up chapter in the Oracle Enterprise Asset Management User's Guide relating to the installation parameters for Installed Base. These parameters must be set up in order to create assets correctly. This issue is related to Note 437577.1.

Setting Up Installed Base Parameters

You must perform the following steps in Oracle Installed Base:

  1. Navigate to the Install Base Administrator responsibility.
  2. Under the Setups menu, click the Install Parameters link.
  3. Set up the Install Parameters for Installed Base. (Additional Information: See the Oracle Installed Base Implementation Guide for assistance on how to set up the Installed Base parameters. See "Set Up Installation Parameters", Setup Steps within Oracle Installed Base, Oracle Installed Base Implementation Guide.)
  4. Make sure that the Freeze checkbox has been selected. If it is unchecked, then select the checkbox.
  5. Save your work.'

(Oracle Metalink Doc Id 748070.1)


Updated Sept 14th: Found another link on metalink, Doc Id 885895.1. This one seems to be the most detailed of the three docs.

Tuesday, September 1, 2009

OPM R12: Migrating Warehouses and Plants

One of the benefits of upgrading to Release 12 is that you have the opportunity to 'restructure' your organization setup just a little bit (assuming you are an OPM customer). When you initially set up your OPM warehouses, you did so by entering a new HR organization, putting 3 to 4 characters followed by a ':' in the name to designate that HR organization as a warehouse. When you saved this new HR organization, it created an inventory organization on the discrete side as well as an OPM warehouse.

In the Oracle documentation (OPM Migration Reference Guide) it states that you can convert these warehouses you've already set up into subinventories under an existing inventory organization. This is superb because the migration is already converting OPM plants into inventory organizations (if there is a resource warehouse code defined for the OPM plant, then no new inventory organization will be created). So by being able to migrate a warehouse as a subinventory you are reducing the number of inventory organizations you'll have to deal with.

There are a few restrictions with this functionality (and unfortunately for me, my company could not migrate as we desired because of these restrictions). These restrictions are, per Oracle documentation:

  • All the warehouses that are being mapped as subinventories under a single organization must belong to the same cost warehouse
  • Warehouses can only be mapped as subinventories to the OPM organization to which they belong (you can't migrate a warehouse belonging to plant A as a subinventory under plant B)

Where we ran into issues was that we had each warehouse set up as its own cost warehouse (not sure why; that decision predated my time) and we wanted to map warehouses from other plants all under a single plant. Overall, the ability to map warehouses to migrate as subinventories is a good feature; unfortunately we were not able to use it.
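
If you're wondering whether your own setup trips the first restriction, the 11i OPM warehouse master should tell you. I believe the table is IC_WHSE_MST with a COST_WHSE_CODE column, but verify both names before relying on this sketch:

    -- Assumed names (11i OPM): IC_WHSE_MST is the warehouse master and
    -- COST_WHSE_CODE points at each warehouse's cost warehouse.
    -- Warehouses you hope to merge under one inventory organization
    -- must share a cost warehouse; sorting this way makes conflicts
    -- easy to spot.
    SELECT whse_code,
           orgn_code,
           cost_whse_code
      FROM ic_whse_mst
     ORDER BY cost_whse_code, whse_code;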

If you have any questions or are curious to see our specific example let me know by email or through comments and I can show you how our situation is playing out.

Tuesday, July 28, 2009

The Challenge of Completing/Fixing Someone Else's Work

Over the past nine months my major goal at work has been to manage our upgrade from 11.5.10.2 to 12.1. My time and attention are primarily focused on this upgrade; however, since this project includes so many other departments and users (DBAs and our hardware team specifically), I find that I have spans of three to four days of downtime waiting for someone else to finish their part of the migration. During these waiting days I've gone through some of our older project plans to verify that things are functioning as they were originally outlined, and I've found quite a few projects that got partway completed and then abandoned. After reviewing the projects and trying to resurrect them, I've decided that trying to complete someone else's work is infuriating.

Let's take our implementation of Daily Business Intelligence as an example. Two years ago a co-worker began to roll out Daily Business Intelligence to our purchasing and payables departments. Shortly after he began this project he decided to take another job outside the company. Since he left, not a soul has touched the DBI project he began.

I became aware of this project a couple months ago and so during my downtime with R12 I decided to see what needed to happen to finish the DBI implementation and rollout. Without going into too many details, here is what I've learned (actually the better term would be re-learned) from trying to resurrect and complete this project.

  1. Documentation is absolutely critical
  2. Open communication is a 'must'
  3. Projects need more than one team member
  4. Oracle documentation is weak (and I'm probably understating that point)

A quick explanation of each point above:

1. Documentation: My co-worker outlined the project and began working on it; however, he did not keep track of which steps he completed nor how the overall project changed during his discovery and initial rollout. Because he omitted completed steps and project changes, I had to essentially walk through Oracle documentation step by step in conjunction with a DBA to see how many steps he had completed before I was able to get an idea of what needed to be done to finish the implementation. I also had to meet with the departments involved to verify that the original project outline still met their needs (which it did not).

2. Open communication: Not just between departments but within departments. If there had been more communication among our own department someone would have been able to pick up the pieces of this project much sooner than two years later.

3. Project Team: By including other people as project team members we would have been able to avoid losing as much information as we did when my co-worker left because other people would have been meeting with and discussing the issues they encountered while working together on the project.

4. Oracle Documentation: Trying to get each module's DBI up and running with accurate data has been somewhat of a challenge. When running a data load, some of the processes fail with errors, but when researching the errors on the internet or on metalink I am only able to find three or four links where someone received the same error, and in each case their error was related to performing transactions in the system, not to a data load in DBI. I've also studied the DBI implementation guide and the DBI user's guide to try and troubleshoot our issues, but once again there is hardly any info that explains the errors we receive or the issues we encounter. I do understand that trying to provide thorough documentation for each and every module in the E-Business Suite has to be a gargantuan project, and I do appreciate what documentation is available via the implementation and user's guides, but it always seems to be that the troubleshooting guide is the document that Oracle forgets to provide (maybe they're in collusion with the consulting industry).

Hopefully going forward, as I work on and implement different projects, I'll remember to follow my own advice so others won't run into the same issues I'm dealing with if they end up needing to finish a project that I started and left incomplete.

Monday, July 20, 2009

Make To Order for OPM

I'm loving this. Found this blog post via Oracle Mix. Finally, process manufacturing has the option of using MTO to drive actual production batches. Now if I could just get our company to import sales orders from our legacy order entry system so that we could use MTO to actually produce according to real sales orders. One step at a time though, guess we ought to get our migration to 12.1.1 finished before I get too carried away.

Tuesday, July 14, 2009

Oracle Pedigree and Serialization Manager

Today I sat in on a demo of the Pedigree and Serialization Manager, an Oracle product in development. Regulations being developed in Europe, Florida, and California are the impetus behind this product. Pedigree and Serialization Manager is designed to reduce counterfeiting, primarily in the pharmaceutical industry, by having the manufacturer serialize each sellable unit and then track that unit through the supply chain via electronic pedigree until it is sold to an end consumer. From a consumer safety point of view I'm okay with validating that the prescription I'm filling is actually made by Pfizer or Merck instead of El Casa de Merck, but from an operations and supply chain point of view I have two major questions: 1) How do you implement such an extensive tracking system without dramatically increasing the cost to the consumer? and 2) What agency, company, or government is going to regulate and store all of this data?

Now let me get back to Oracle's product. During the demo the presenter said that once a serial number is placed on a sellable unit, every transaction that happens against that unit will be stored electronically in the form of a pedigree. That pedigree will be passed from manufacturer, to wholesaler, to distributor, to pharmacy, so that when a consumer purchases a product they can validate the origins of their medication. It's a great idea, one which the state of California is looking to enforce by 2015, but how is each company going to pass this data back and forth in a meaningful manner? Sure, you can create a PDF and have a server forward a collection of files as you pass the drug from place to place, but doesn't that seem like one of those 'red tape' situations that bog down a supply chain?

One thought would be to have all companies use the same application so that the data passed would be compatible with each specific system. It seems like Oracle is trying to go down this road by basing this product on their Fusion tech stack, which is supposed to easily integrate with E-Business Suite, JD Edwards, PeopleSoft, and even SAP. However, it still seems like passing this much information for each sellable unit is going to drastically affect database performance.

Europe is headed in a different direction. Apparently they plan on having manufacturers serialize units and upload their production information to a central database; then, when a consumer purchases a drug, the pharmacy will be linked to that same database and it will verify that the drug being sold is a valid product. I like this idea a little better because it doesn't bog down each system within the supply chain with gobs of information. The only drawback here is how, and by whom, the centralized database gets administered.

Oracle's Pedigree and Serialization Manager has capabilities to comply with either of the regulations stated above; however, until these regulations are strictly enforced I don't see too many companies looking into implementing this product. Currently these regulations are focused on the pharmaceutical industry, but sometime around 2000 the FDA began a 'track and trace' initiative, which is described below. Based on that statement it seems like everyone within the food and drug industry ought to start preparing to implement some sort of pedigree/serialization. It will be interesting to see how companies handle this regulation, how much it will increase costs, and how much of those cost increases will trickle down to the consumer.

‘The ability to trace products both forwards and backwards is critical for protecting consumers. FDA has formed an internal multi-Center group to meet with external entities (such as industry, consumers, and foreign governments) to better understand the universe of track and trace systems that are currently in use or are being developed. FDA is currently reaching out to various organizations to gain a better understanding of best practices for traceability and the use of electronic track and trace technologies to more rapidly and precisely track the origin and destination of contaminated foods, feed, and ingredients. FDA will use the information to develop the key attributes for a successful track and trace system. In addition, FDA plans to issue a Request for Applications to provide funding to six states to establish Rapid Response Teams to investigate multi-state outbreaks of foodborne illness.’

David Acheson, M.D., F.R.C.P.
Associate Commissioner for Foods
Food and Drug Administration

June 12, 2008