Best Practices – Lazy or Strategy

I rarely get nostalgic and think of the “good ole times”, but perhaps this is one of those times.  We at Enkitec still joke about the constant use of the term “Best Practices” as a vendor attempt to sell more product.  As we began to hear the term more and more, it always made for a fun discussion when we talked with engineers and friends from Oracle about installing an engineered system.  These discussions typically ended with our sharing our experience and approach with our friends for consideration.  However, lately the ballad of “Best Practices” has left the engineering discussion and moved to the consultants.

The latest incident occurred a few weeks ago at a customer location.  The vendor’s pre-sales consultants engaged us, and the customer, in a four-hour discussion about the installation of four engineered systems.  The discussion focused on the physical installation; it did not address the application requirements, the system requirements, or the customer’s infrastructure capabilities.  After the meeting, these “engineers” left and generated a 13-page document citing “Best Practices” recommendations.  Missing from this document were things like customer and application requirements, physical data points, and application observations.  During the discussion, the consultants could not articulate solution benefits or experiences; they simply offered “Best Practices” as the answer to every question.  I left the meeting desperately wanting those four hours back.

Now, as I ponder this, I lament: “Are the days of actually talking with a customer and defining the best solution for the customer’s situation … gone?”

The jokes around the Enkitec office centered on the laziness of installers, but I am starting to believe the use of “Best Practices” is more of a strategy than plain laziness.

Are “Best Practices” necessary?

I may be alone here, but I believe they are necessary.  As a performance engineer for a vendor, I participated in many TPC and AIM benchmarks.  Those benchmarks provided a decent baseline for performance in a controlled environment.  I think the same is true of best practices.  A best practice identifies the baseline of a perfect system or application, in a perfect configuration, installed by an engineer who had nothing else to do.  As we all know, this is rarely the case within a customer’s infrastructure and application environment.  However, the customer can evaluate the solution and the comments at the end of the best practices documentation.  These comments provide the pros and cons of each solution.  So yes, best practices are necessary, but they are not an excuse.  Full disclosure is required.

Is it Lazy?

As I mentioned earlier, we used to joke about the use of best practices.  We thought, at the time, that individuals citing best practices were simply using someone else’s work as a reference.  Unless the consultant could provide the full disclosure associated with the best practices comment, we generally knew two things: 1) the consultant had probably never installed the solution, and 2) the consultant had probably never experienced the solution in the wild.  They could stand behind the work of someone else and claim “Best Practices” without having to provide an adequate defense.

So, an inexperienced consultant could provide a “solution” without 1) providing a defense, 2) collecting or gathering data, or 3) performing physical analysis of data.  Then, to top it all off, they also relinquish any responsibility for the “solution”.  In the past, yes, we would have called that lazy.  But now, I think it has become a strategy of the consulting firm.

Strategy to level the playing field?

As I sat in my Georgia Tech MBA class on Global Product Strategy today, I started wondering whether it was not, in fact, a smart strategy.  How can a vendor that does not have a history of experienced consulting compete against experienced consulting firms?  Relying on consultants with less than five years of operational experience to deliver a sound solution is challenging.  However, if you give the consultant an “equalizer,” such as “Best Practices” in every document, then it is easier to sell the “solution” as sound for the customer.  The vendor no longer requires a solid staff of experienced engineers; it simply needs to define a generic solution and socialize it as a Best Practices solution.

Are Vendor Best Practices real?

The solution is real in most cases and, in most cases, a good idea.  I believe someone sat in a lab and performed the processes defined in a best practices document.  I am also sure that the solution, if performed correctly, provides the benefits as indicated.  But are they really best practices?  After all, one primary characteristic of the definition of “Best Practices” is the term “widely accepted,” meaning that the solution is widely accepted by the community.  However, most vendors publish best practices at the same time a solution is published, which challenges the “widely accepted” requirement of defining a best practice.  So, as we implement systems and solutions for customers, we should be wary of the term “Best Practices” as it comes from a specific vendor: the practices may be accepted by the vendor, but not widely accepted by the community.

Why be wary?

With respect to citing “Best Practices” as the only way to go: is this a bad thing?  As indicated above, there is a tendency to call new technology a best practice, although it is not widely used or accepted by the community.  Also, the mantra of best practices validates consultants who may not truly understand the technology or its use.  The inexperienced consultant will cite “Best Practices” as the reason for implementing a solution, regardless of the benefit or detriment to the client.  Therefore, as with anything else, we have to do our homework to make sure a solution is widely accepted and is in the best interest of the customer.

It’s all about the customer

Why do we care about the socialization of best practices?  Because, in the end, we end up having to rescue customers from the latest “Best Practice”.  Most customers don’t have the luxury of a test lab, and some don’t have skilled resources dedicated to the solution.  Most customer resources play the role of utility player, knowing how to support an assortment of products at a high level.

How do we know when to implement the best practices stated by these vendors?  As consultants, we should do as we have always done:

  • Listen and understand: Listen to the customer and understand what their team can implement and support.  Just because a best practice is written does not mean that it will fit in a customer’s environment.  Our role as consultants carries the expectation that we provide the best solution for the customer’s environment.


  • Understand the technology: Don’t recommend a product because it is the latest technology.  Recommend the product because it supports the customer’s requirements and provides flexibility.  Sometimes a best practice uniquely leverages a vendor’s product, which limits the flexibility for growth or integration with other products.


  • Read the fine print: Most best practices come with multiple implementation options, as do most technologies.  Although rarely stated by consultants, probably because they don’t understand them, these options come with benefits or deficiencies.  Some indicate that the solution is complicated to implement or support; the issue may even be a costly implementation due to licensing.

As I step away from my computer, I will maintain the traditions of most experienced consultants.  I will continue to evaluate technology in terms of how it helps customers.  Technology is a tool we use to meet our requirements and provide us benefit.  Too often, technology sold to a customer becomes entrapment in a vendor or solution; it becomes a cage.

I guess, as I look at it, the three bullets above become the “Best Practices” for consultants.  Remember, success is gauged by a successful customer implementation, not a successful technology implementation.  There have been many successful technology implementations that served little purpose.


Exadata X4 Unveiled

I had the opportunity to install one of the first Exadata X4-2 frames prior to Oracle’s announcement on December 11, 2013.  The Exadata X4 improves on the already popular Exadata brand, while proving the scalability and flexibility of the Exadata platform.

The installation also introduces new versions of software, including the configuration tool, the installation process, and the storage cell software.  This blog post addresses the changes to the configuration tool and the installation process.  I am sure we will post more blogs on the storage cell software changes as we continue testing in our lab.

Configuration Tool

The configuration tool has been completely revamped.  It still supports the old platforms, Windows and Linux, but it now also supports Mac OS.  As a Mac user, I am very happy with this addition, which makes preparing customer installation documents much easier.

Apart from the new platform support, there are multiple changes within the tool and in its output.  I will post another blog covering the complete set of changes to the new configuration tool.

With respect to the X4 implementation, the changes to the configuration tool’s output represent a “no-nonsense” approach.  The following bullet points outline the new output files.

  • <cluster>.xml – The actual configuration file that drives the onecommand process.
  • <cluster> – Reads the XML file and performs the checkip process that was a prerequisite of the old installation process.  This file was added with the December version of the installation files and was not originally planned.
  • <cluster>-InstallationTemplate.html – A new layout of the installation template.  The new layout includes a table identifying most of the required information in a much smaller file.  Although useful, the new layout leaves some detail out; I believe Oracle is still adjusting this information.
  • <cluster>-preconf_rack_0.csv – The “preconf.csv” file from before, used during the “apply config” procedure.  This file defines the IP addresses for all the Exadata machines.
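The generated file names follow the cluster name chosen in the tool.  As a quick sketch, the presence of the outputs can be checked before starting an installation; “dm01” is a hypothetical cluster name used here only for illustration:

```shell
# Hypothetical check that the configuration tool outputs are all present.
# "dm01" is an illustrative cluster name, not from the original post.
cluster=dm01
for f in "$cluster.xml" "$cluster-InstallationTemplate.html" "$cluster-preconf_rack_0.csv"; do
    if [ -f "$f" ]; then
        echo "found $f"
    else
        echo "missing $f"
    fi
done
```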

First Look

So, as expected, visual inspection of the new Exadata X4 does not reveal anything different from the Exadata X3 frame.  The quarter rack X4 looks the same as the quarter rack X3.  However, the new half and full rack X4 differ from the standard X3 racks of the same configuration: the standard X4 half and full rack configurations no longer include the Infiniband spine switch.  Upon detailed inspection of each component, the changes become visible.

Compute Node

The compute nodes include several configuration changes: larger local disks, a new processor class and core count, and changes in the memory configuration.

The storage processor output reveals the new frame type; the machine identifier has been removed for customer anonymity.


Internally, a review of the processors reveals forty-eight entries like the one listed below.


The forty-eight entries represent two processors with twelve cores each, for a total of twenty-four cores.  Each core is dual-threaded, which yields the count of forty-eight.
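The arithmetic behind the forty-eight entries can be sanity-checked with a quick shell calculation; on a live compute node, the same number comes from counting the processor entries directly:

```shell
# Thread count on the X4-2 compute node: 2 sockets x 12 cores x 2 threads = 48
sockets=2
cores_per_socket=12
threads_per_core=2
echo $((sockets * cores_per_socket * threads_per_core))   # prints 48

# On a live node, the equivalent count (commented out here):
# grep -c ^processor /proc/cpuinfo
```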

At the memory level, the Exadata X4 provides a minimum of 256GB of RAM, expandable to 512GB.  The display below represents the customer’s minimum configuration of 256GB of RAM.


The rest of the compute node configuration remains consistent with the X3 implementation.

Storage Cell

The storage cell configuration includes the same number of processors, but more memory and different disk configuration options.  The X4 also includes the new storage server software, as indicated in the below capture of the imageinfo command.


As indicated in the following capture, Oracle increased the memory in the storage cells from 64GB to 96GB of RAM.


The capacity of the storage cell components increases as well, for both the flash memory and the physical disks.  The Exadata X4 storage cell disk options include either a 1.2TB 10,000 RPM high performance drive or a 4TB 7,200 RPM high capacity drive.  The associated capture reflects the customer’s choice of the high capacity option with the 4TB drives.


The flash component includes four F80 PCIe cards, each with four 200GB flash modules, as presented in the capture below.


The following capture of “list cell physicaldisk” presents the 12 physical disks and the four F80 flash cards, each with four independent flash modules.  The total amount of flash per cell is now 3.2TB of flash cache.
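The per-cell capacity figures above follow directly from the component counts; a quick arithmetic check:

```shell
# Capacity arithmetic for one X4 storage cell, per the counts above:
# flash: 4 F80 cards x 4 modules x 200 GB; disk: 12 drives x 4 TB (high capacity)
flash_gb=$((4 * 4 * 200))    # 3200 GB, i.e. 3.2 TB of flash per cell
raw_disk_tb=$((12 * 4))      # 48 TB of raw disk per high capacity cell
echo "$flash_gb GB flash, $raw_disk_tb TB raw disk"
```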


Configuration Process

The physical implementation process contains the same steps for the hardware configuration.  However, the expansion of the local drives and a modification to the “” process change the duration of the pre-configuration.

In past configurations, the reclaim disk process ran in the background and completed in about one hour.  At the end of the disk reclaim process, the nodes would reboot.

However, the new reclaim disk process forces a reboot of each associated node and then executes the disk rebuild before network services are available.  The only way to monitor the disk reclaim (or access the system) is through the console.  The new process takes approximately three hours, as indicated in the below capture of the console display.


As indicated above, the new, larger local drives (600GB) contribute to the longer duration of the disk reclamation process.

At the end of the disk reclamation process, the compute nodes reboot and the Exadata frame is ready for the IP assignment through the “” process.  At the completion of the “” process, the configuration moves from the hardware procedure to the software configuration process and the “OneCommand” initiation.

OneCommand Procedure

With the Exadata X4 implementation, the software configuration includes a new “OneCommand” process.  The process includes fewer steps than the previous Exadata frame process, as several of the old steps have been consolidated.

The following diagram represents the new set of steps for the installation process.


The following sections outline a few notes about the above steps.

The first thing that becomes evident is the “missing” /opt/oracle.SupportTools/onecommand directory.  In previous versions of delivered Exadata frames, the “onecommand” directory contained a version of the onecommand scripts.  Generally, we would replace this directory with a copy of the latest onecommand scripts downloaded from MOS.

The new implementation implies a direct correlation between the latest MOS version and the new configuration script.  This correlation also challenges the old “image” process that some installers use, as the image may change with each patch update.

Location for Implementation Files

After the new onecommand directory is populated, the preparation step consists of loading the required files for the installation: the Oracle distribution files, the latest patch files, and a version of OPatch, all placed in a staging directory.  With the new configuration process, the staging directory is now /opt/oracle.SupportTools/onecommand/WorkDir.
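The staging step amounts to copying the downloaded media into the WorkDir.  A minimal sketch, where the helper name stage_files is hypothetical and the media is assumed to arrive as zip files:

```shell
# Hypothetical helper: copy downloaded media into the OneCommand WorkDir.
# On a real compute node the target would be
# /opt/oracle.SupportTools/onecommand/WorkDir (and would require root).
stage_files() {
    src=$1       # directory holding the downloaded distribution/patch zips
    workdir=$2   # staging directory expected by the installer
    mkdir -p "$workdir" && cp "$src"/*.zip "$workdir"/
}
```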

The second step in the configuration process validates that these files are placed in the correct location.

The other change to the installation process is the execution of the “OneCommand” process itself.  The new process requires executing the install script with the configuration file and the step to perform.  The following command executes the “list step” process from the install command.

# cd /opt/oracle.SupportTools/onecommand

# ./ -cf <cluster>.xml -l

The execution of a specific step uses the following command.

# ./ -cf <cluster>.xml -s <step #>
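Taken together, the per-step execution lends itself to a small wrapper that runs a range of steps and stops at the first failure.  This is only a sketch: the installer script name is elided in the post, so “./install.sh” below is an assumption, as is the cluster file name:

```shell
# Hypothetical wrapper: run OneCommand steps first..last in order,
# stopping at the first failure. "./install.sh" is an assumed script name.
run_steps() {
    first=$1
    last=$2
    cf=$3   # cluster configuration xml, e.g. dm01.xml
    step=$first
    while [ "$step" -le "$last" ]; do
        echo "Running step $step"
        ./install.sh -cf "$cf" -s "$step" || return 1
        step=$((step + 1))
    done
}
```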

The log files supporting each step are now located in the following location:


In future blogs, I will review the configuration process for the new Exadata environment.
