In some parts of the world, bribing government officials is still considered a normal cost of doing business. Elsewhere there has been a growing trend over the past 40 years to make it illegal for a corporation to pay bribes. In the United States, Congress passed the Foreign Corrupt Practices Act (FCPA) in 1977 in the wake of a succession of revelations of companies paying off government officials to secure arms deals or favorable tax treatment. More recently other governments have implemented anticorruption statutes. The U.K., for instance, enacted the strict Bribery Act in 2010 to replace increasingly ineffective statutes dating back to 1879. The purpose of these actions is to enable ethical and law-abiding companies to compete on a level playing field with those that are neither. A cynic might wonder about the real, functional difference between, say, Wal-Mart’s recent payments to officials in Mexico to accelerate approval of building permits and the practice in New York City of having to engage expediters to ensure timely sign-offs on construction approval documents. No matter – the latter is legal (it’s a domestic issue, after all) while the former is not.
Moreover, the U.S. Department of Justice (DOJ) and the Securities and Exchange Commission (SEC) have increased their oversight of bribery. At the beginning of 2013 they jointly issued the Resource Guide to the U.S. Foreign Corrupt Practices Act. For its part, the SEC has stepped up enforcement using its own resources. Recently, it charged a group of bond traders with enabling a Venezuelan finance official to embezzle millions of dollars by disguising the money as fees paid to the broker/dealer to handle apparently legitimate transactions. Tellingly, though, in another relatively recent bribery case, this one involving Morgan Stanley, the SEC declined to include the company in its enforcement action because the firm had demonstrated diligence in trying to prevent the violation.
Before anticorruption laws, it was expedient for corporations to pay government officials to close business, get preferred status or prevent punishment. Once the laws were established, that stopped being the case. However, from a management standpoint, compliance with the law became complicated because of the dual nature of the corporation, which is both an entity and a group of individuals. When an individual employee breaks the law, is that person at fault, the corporation, or both? Regardless of how a case is decided, there can be severe reputational damage to a company found violating the law, and that will have repercussions for corporate boards and executives.
This question leads to the agency dilemma, an important consideration in enterprise risk management. Economists long ago recognized the agency dilemma when the modern corporation separated the roles of its principals (that is, the shareholders) from its management. The agency issue exists where the best interests of the principals are not aligned with, or are in conflict with, the interests of the agents (the professional managers running the corporation). But agency issues also extend to the company’s executives and may be rife in any large-scale business. Within the management group, authority to act independently is delegated down through the hierarchy, and the interests of lower-level managers may conflict with those of senior executives, the board of directors and shareholders. For example, suppose that a local manager believes his performance evaluation, compensation and prospects for promotion hinge on the timely opening of a new facility. Confronted with a culture of payoffs for permits, that manager may try to find a way to pay officials for expedited consideration, especially if he is local to the area. From that individual’s perspective, corrupt activity may be the norm, and he may believe himself to be clever enough to violate company policy without detection.
It was once acceptable for a company to claim that it had a stated policy prohibiting bribery and that executives were ignorant of an employee’s actions. Absent proof to the contrary, that often was enough. However, the FCPA changed this norm, imposing on companies the need for diligence and affirmative actions to prevent employees from breaking the law as well as to detect and report any such violations that do occur (which is how the Wal-Mart situation came to light). Public standards, too, have changed since the 1970s. Despite its self-disclosure after the fact and the steps it took to address the corrupt behavior, Wal-Mart suffered severe reputational damage. Yet even with the likelihood of such consequences, our benchmark research reveals that just 6 percent of companies have effective controls for managing reputational risk.
We assert that the most effective control is to prevent illegal activity from taking place at all. Short of that, companies that can demonstrate that they have taken all reasonable steps to prevent a violation of the law are in a better position to claim that the individual, not the company, is at fault.
An organization should have clearly articulated and documented antibribery and anticorruption policies and procedures, institute mandatory training for executives and managers (with signed acknowledgments that they have completed it), and put in place incentives and disciplinary measures. However, these measures alone are increasingly insufficient to demonstrate diligence in preventing corrupt activities. Companies also must have a software-supported internal control system that flags suspicious activity immediately and triggers a rigorous remediation process that analyzes, investigates and documents the disposition of each incident. Incidents that are detected long after their commission are more difficult to cope with and pose much higher legal, financial and reputational risk.
Software is available that helps detect activities that violate anticorruption laws and regulations as they occur or shortly thereafter; this is far more effective than waiting for internal audits or (worse still) whistleblowers to uncover malfeasance. To prevent violations of the FCPA and other antibribery statutes, corporations must be able to monitor their financial and other systems for warning signs. These applications take advantage of operational intelligence, a class of analytical capabilities built on event-focused information-gathering that can uncover suspicious actions as they occur. Our research on innovating with operational intelligence shows that companies use an array of systems (led by IT systems management and major enterprise applications such as ERP and CRM) to track events, analyze them, report results and create alerts when conditions warrant, as detailed in the related chart. The research also shows that about half (53%) use 11 or more information sources in implementing their operational intelligence efforts. In the future, effective FCPA software increasingly will need to look at a wider range of internal data as well as information from external sources and social media to determine, for example, whether a consulting company that just received a finder’s fee is run by or employs a relative of a government official. Today, companies can utilize software from large vendors such as IBM, Oracle and SAP, as well as vendors with FCPA-specific software such as Compliancy and Oversight Systems.
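To make the idea of software-based monitoring concrete, the sketch below shows the kind of rule-based SQL check such a control system might run daily against a payments ledger. The schema (payments, approved_vendors), the thresholds and the keyword are hypothetical assumptions for illustration only, not a description of any vendor’s product.

```sql
-- Hypothetical tables: payments(payment_id, vendor_id, approver_id, amount,
-- country_code, memo, created_at) and approved_vendors(vendor_id).
-- The rules below are illustrative, not a recommended FCPA rule set.
SELECT p.payment_id, p.vendor_id, p.approver_id, p.amount, p.country_code
FROM payments p
LEFT JOIN approved_vendors v
       ON v.vendor_id = p.vendor_id
WHERE p.created_at >= CURRENT_DATE - INTERVAL '1' DAY
  AND ( v.vendor_id IS NULL                      -- payment to an unvetted vendor
     OR (p.amount >= 9000 AND p.amount < 10000)  -- clustered just under an approval threshold
     OR UPPER(p.memo) LIKE '%FACILITAT%'         -- suspect wording in the memo field
      );
```

In practice, each row such a query returns would open a case in the remediation workflow described above, so that the analysis, investigation and disposition of the incident are documented.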
Bribery and corruption are unlikely to disappear entirely. Regardless of anyone’s best intentions, corporate boards and executives can find themselves enmeshed in a scandal not of their own devising. The best defense in such cases is plain evidence that the organization has done everything reasonable to prevent bribery and that it has discovered and dealt with it promptly when it does occur. Policies and training are vital components, but software can be the extra component necessary to improve the effectiveness of monitoring and auditing to support anticorruption efforts.
Robert Kugel – SVP Research
Our benchmark research on business technology innovation found that analytics is the most important new technology for improving organizations’ performance; participants ranked big data only fifth out of six choices. This and other findings indicate that the best way for big data to contribute value to today’s organizations is to be paired with analytics. Recently, I wrote about what I call the four pillars of big data analytics, the foundation on which the technology must be built: information optimization, predictive analytics, right-time analytics, and the discovery and visualization of analytics. These components gave me a framework for looking at Teradata’s approach to big data analytics during the company’s analyst conference last week in La Jolla, Calif.
The essence of big data is not just handling the variety of data but also optimizing the information the business uses for whatever needs arise, which my colleague has identified as a key value of these investments. Data diversity presents a challenge to most enterprise data warehouse architectures. Teradata has been dealing with large, complex sets of data for years, but today’s different data types are forcing new modes of processing in enterprise data warehouses. Teradata is addressing this issue by focusing on a workload-specific architecture that aligns with MapReduce, statistics and SQL. Its Unified Data Architecture (UDA) incorporates the Hortonworks Hadoop distribution, the Aster Data platform and Teradata’s stalwart RDBMS EDW. The Big Data Analytics appliance won our annual innovation award in 2012. The system is connected through InfiniBand and accesses Hadoop’s metadata layer directly through HCatalog. Bringing these pieces together represents the type of holistic thinking that is critical for handling big data analytics; at the same time there are some costs, as the system includes two MapReduce processing environments. For more on the UDA architecture, read my previous post on Teradata as well as my colleague Mark Smith’s piece.
Predictive analytics is another foundational piece of big data analytics and one of the top priorities in organizations, yet according to our big data research it is not available in 41 percent of organizations today. Teradata is addressing it in a number of ways. At the conference, Stephen Brobst, Teradata’s CTO, likened big data analytics to a high-school chemistry classroom that has a chemical closet from which you pull out the chemicals needed to perform an experiment in a separate work area. In this analogy, Hadoop and the RDBMS EDW are the chemical closet, and Aster Data provides the sandbox where the experiment is conducted. With multiple algorithms currently written into the platform and many more promised over the coming months, this sandbox provides a promising big data lab environment. The approach is SQL-centric and as such has its pros and cons. The obvious advantage is that SQL is a declarative language that is easier to learn than procedural languages, and an established skills base exists within most organizations. The disadvantage is that SQL is not the native tongue of many business analysts and statisticians. While it may be easy to call a function within the context of a SQL statement, the same person who can write the statement may not know when and where to call the function. One way for Teradata to address this need expediently is through its existing partnerships with companies such as Alteryx, which I wrote about recently. Alteryx provides a user-friendly analytical workflow environment and is establishing a solid presence on the business side of the house. Teradata already works with predictive analytics providers such as SAS but should expand further with companies like Revolution Analytics, which I assessed and which uses R technology to support a new generation of tools.
Teradata is exploiting its advantage with algorithms such as nPath, which shows the path that a customer has taken to a particular outcome such as buying or not buying. According to our big data benchmark research, being able to conduct what-if analysis and predictive analytics are the two most desired capabilities not currently available with big data, as the chart shows. The algorithms that Teradata is building into Aster help address this challenge, but despite customer case studies shown at the conference, Teradata did not clearly demonstrate how this type of algorithm and others integrate seamlessly to address the overall customer experience or other business challenges. While presenters described it in terms of improving churn and fraud models, and we can imagine how the handoffs might occur, the presentations were more technical in nature. As Teradata gains traction with these types of analytical approaches, it will behoove the company to show not just how the algorithms and SQL work but how they work for business users and analysts who are not as technically savvy.
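For readers unfamiliar with the idiom, here is a rough sketch of what an nPath call looks like in Aster’s SQL-MR style, finding customers who viewed one or more product pages and then checked out. The table, columns and symbol names are hypothetical, and the exact clause syntax varies by Aster release, so treat this as illustrative rather than definitive.

```sql
-- Hypothetical clickstream table: web_clicks(customer_id, click_time, page_type)
SELECT customer_id, purchase_path
FROM nPath(
    ON web_clicks
    PARTITION BY customer_id
    ORDER BY click_time
    MODE (NONOVERLAPPING)
    PATTERN ('BROWSE+.BUY')                          -- one or more product views, then a purchase
    SYMBOLS (page_type = 'product_view' AS BROWSE,
             page_type = 'checkout'     AS BUY)
    RESULT (FIRST(customer_id OF BROWSE) AS customer_id,
            ACCUMULATE(page_type OF ANY(BROWSE, BUY)) AS purchase_path)
);
```

Even with a working query like this, an analyst still has to know which pattern is worth looking for and what to do with the result, which is exactly the skills gap noted above.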
Another key principle behind big data analytics is the timeliness of the analytics. Given the nature of business intelligence and traditional EDW architectures, until now timeliness of analytics has been associated with how quickly queries run. This has been a strength of Teradata’s MPP shared-nothing architecture, but other appliance architectures, such as those of Netezza and Greenplum, now challenge Teradata’s hegemony in this area. Furthermore, trends in big data make the situation more complex. In particular, with very large data sets, many analytical environments have replaced traditional row-level access with column access. Column access is a more natural way to access data for analytics, since a query does not have to read through an entire row of data that may not be relevant to the task at hand. At the same time, column-level access has downsides, such as the reduced speed at which you can write to the system; also, as the data set used in the analysis expands to a high number of columns, it can become less efficient than row-level access. Teradata addresses this challenge by providing both row and column access through innovative proprietary access and computation techniques.
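As a minimal sketch of how this row-versus-column choice surfaces to a table designer, the DDL below contrasts a column-partitioned table suited to scan-heavy analytics with a conventional row-oriented table keyed for single-row lookups. The tables and columns are hypothetical, and the exact options available depend on the Teradata release in use.

```sql
-- Hypothetical history table stored column-partitioned for scan-heavy analytics
CREATE TABLE sales_history (
    sale_id   BIGINT,
    store_id  INTEGER,
    sale_date DATE,
    amount    DECIMAL(12,2)
)
NO PRIMARY INDEX
PARTITION BY COLUMN;

-- A comparable row-oriented table keyed for single-row lookups and frequent writes
CREATE TABLE sales_current (
    sale_id   BIGINT,
    store_id  INTEGER,
    sale_date DATE,
    amount    DECIMAL(12,2)
)
PRIMARY INDEX (sale_id);
```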
Exploratory analytics on large, diverse data sets also has a timeliness imperative. Hadoop promises the ability to conduct iterative analysis on such data sets, which according to our big data benchmark research is the reason that companies store big data in the first place. Iterative analysis is akin to the way the human brain naturally functions: one question naturally leads to another. However, tools such as Hive, which provides an SQL-like way to access Hadoop data, can be very slow, sometimes taking hours to return a query. Aster enables much faster access and therefore provides a more dynamic interface for iterative analytics on big data.
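To illustrate the gap, here is a minimal sketch of the kind of exploratory HiveQL an analyst might run repeatedly while chasing a question; the table, partition column and values are hypothetical. Because each run typically compiles into one or more batch MapReduce jobs, the wait between question and answer is what makes iteration painful.

```sql
-- Hypothetical raw clickstream table in Hadoop, partitioned by day (dt)
SELECT page_type,
       COUNT(DISTINCT customer_id) AS visitors
FROM   web_clicks_raw
WHERE  dt = '2013-04-01'
GROUP BY page_type
ORDER BY visitors DESC
LIMIT 20;
```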
Timeliness also has to do with incorporating big data in a stream-oriented environment; only 16 percent of organizations are very satisfied with the timeliness of events, according to our operational intelligence benchmark research. In use cases such as fraud detection and security, rule-based systems work with complex algorithmic functions to uncover criminal activity. While Teradata itself does not provide streaming or complex event processing (CEP) engines, it can provide the big data analytical sandbox and the algorithmic firepower that such systems need. Teradata already partners with major players in this space but would be well served to partner further with CEP and other operational intelligence vendors to expand its footprint. These vendors will be covered in our upcoming Operational Intelligence Value Index, which is based on our operational intelligence benchmark research; that research found that analyzing business and IT events together is very important in 45 percent of organizations.
The visualization and discovery of analytics is the last foundation, and here Teradata is still a work in progress. While some of the big data visualizations Aster generates show interesting charts, they lack the context to help people interpret them. Furthermore, the visualization is not especially intuitive and requires writing and customizing SQL statements. To be fair, most visual discovery tools today are relationally oriented, and Teradata is trying to visualize large and diverse sets of data. Furthermore, Teradata partners with companies including MicroStrategy and Tableau to provide more user-friendly interfaces. As Teradata pursues the big data analytics market, it will be important for the company to demonstrate how it works with its partners to build a more robust and intuitive analytics workflow environment and visualization capability for the line-of-business user. Usability (63%) and functionality (49%) are the top two considerations when organizations evaluate business intelligence systems, according to our research on next-generation business intelligence.
Like other large industry technology players, Teradata is adjusting to the changes brought by business technology innovation in just the last few years. Given its highly scalable databases and data modeling – areas that still represent the heart of most companies’ information architectures – Teradata has the potential to pull everything together and leverage its current installed base. Technologists looking at Teradata’s new and evolving capabilities will need to understand the business use cases and share these with the people in charge of such initiatives. For business users, it is important to realize that big data is more than just visualizing disparate data sets; the greater value lies in setting up an efficient back-end process that applies the right architecture and tools to the right business problem.
VP and Research Director