What is big data?
Big data refers to very large volumes of data, their use, and their integration into decision-making and management processes. In addition to readily available and prepared data such as CRM / customer data or market analyses, more and more external data is being used, for instance to measure and manage financial risk. At the external level, freely available raw data has to be distinguished from prepared data that has already been supplied to the bank. The range of collected data covers both structured data and unstructured text, e.g. from a social media platform.
Which potentials lie in big data for risk management?
Using big data can enhance analysis and model quality in risk management (e.g. of application and behavior scorecards), because statistical indicators can be estimated on a broader data basis. Management impulses can thus be derived and interpreted faster, supported by accelerated information provision. Scenario simulations over very large data volumes make it possible to detect risk concentrations efficiently and to react more quickly to new market developments. Big data can also be used for fraud detection: pattern recognition across combined internal and external data helps to identify fraud (for instance money laundering or credit card fraud) more precisely, faster, and with less manual effort. Innovative start-ups (e.g. Kreditech) have already exploited this new data diversity, seizing opportunities to position themselves in niches of risk management or to launch joint ventures with classic financial institutions.
Beyond risk management, the sector sees major potential for big data in other application areas as well. Collecting web click streams and geolocation data, for example, enables customized product offers with intelligent pricing, as has already become common practice in other sectors; this should help both to win new customers and to retain existing ones. Algorithm-based trading, which already accounts for a substantial share of daily market turnover, is another notable application of big data.
Has experience already been gathered in this area?
Numerous banks have already begun to implement big data projects. One example in risk controlling is the Singaporean bank UOB. It successfully tested a big-data-based risk system that uses in-memory technology (data storage in main memory) and reduced the calculation time of its bank-wide risk figure (value at risk) from about 18 hours to only a few minutes. In the future this will make it possible to carry out stress tests in near real time and to react more quickly to new risks. Morgan Stanley delivered another success story with big data technology within an existing business model: the bank expanded its big data processing capacities and thereby improved its portfolio analysis in both scale and result quality. These processes are expected to improve financial risk management significantly through automated pattern recognition and greater transparency.
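The risk figure mentioned above, value at risk, is conceptually simple even though computing it across an entire bank is what demands hours of runtime (or in-memory technology). The following sketch shows the historical-simulation variant on hypothetical data; it is an assumption-laden illustration of the metric, not a description of UOB's actual system.

```python
import random

def historical_var(pnl, confidence=0.99):
    """Historical-simulation VaR: sort daily P&L outcomes and read off
    the loss that is not exceeded at the given confidence level."""
    losses = sorted(-p for p in pnl)  # losses as positive numbers
    k = min(len(losses) - 1, int(confidence * len(losses)))
    return losses[k]

# one year (~250 trading days) of hypothetical daily P&L figures in EUR
random.seed(7)
pnl = [random.gauss(0, 1_000_000) for _ in range(250)]
print(f"1-day 99% VaR: EUR {historical_var(pnl):,.0f}")
```

At bank-wide scale the sorting step above runs over millions of simulated portfolio revaluations per scenario, which is why keeping the data in memory rather than on disk cuts runtimes so drastically.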
The German company Kreditech, which offers creditworthiness assessments of private individuals, is an example of a successful niche player. It focuses on location data, data from social networks, web analyses, and data on online purchasing behavior, feeding up to 10,000 data points per assessment into a “big data pool”. The US company Kabbage uses a similar approach: based on big data, it grants loans to corporate customers for pre-financing and working capital. The external data it uses comes, among other sources, from sales platforms such as Amazon, delivery services, and social media platforms. The company Paymint specializes in compliance: its big data software for dynamic fraud pattern recognition is used to combat credit card fraud. Depending on the size of the credit card provider, several million transactions are analyzed each month.
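How thousands of heterogeneous data points can be condensed into a single creditworthiness estimate can be sketched with a logistic scorecard, the classic form of an application scorecard. The feature names and weights below are invented for illustration and have no relation to Kreditech's or Kabbage's actual models.

```python
import math

def score(features, weights, bias=0.0):
    """Logistic scorecard: combine weighted feature values into an
    estimated default probability between 0 and 1."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# purely hypothetical features and weights (negative weight = risk-reducing)
applicant = {"on_time_payment_rate": 0.9, "social_profile_age": 0.4, "device_risk": 0.1}
weights   = {"on_time_payment_rate": -2.0, "social_profile_age": -0.5, "device_risk": 3.0}
print(f"estimated default probability: {score(applicant, weights, bias=-1.0):.3f}")
```

In practice the weights are fitted on historical outcomes, and the value of adding each further data source is judged by how much it improves the model's discriminatory power, not by data volume alone.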
What is to be taken into account for successfully implementing big data?
zeb recommends an evolutionary implementation path in order to deal with the complexity and breadth of big data in a structured manner. The first step is to focus on collecting internal data. Further value-creating data sources should then be integrated, provided that the marginal benefit of integration exceeds the additional cost. An iterative process with test, learning, and adjustment loops is indispensable, since more data does not automatically mean better data quality or analyzability. Small steps and incremental improvements are also key to handling the risks and stumbling blocks that can arise during implementation.
According to a study by the Fraunhofer Institute, data protection and data security are the largest barriers to successful big data implementations, followed by budget restrictions, competing priorities, and a lack of expertise and understanding of the topic. Technical obstacles, such as unusable data or immature technology, are less critical. This confirms that the potential of big data (in risk management) can in principle be realized, as long as the identified hurdles are addressed with the appropriate skills and expertise.
Innovative niche providers in particular have demonstrated that big data is a promising topic for the financial sector. Established banks have not yet fully tapped the variety of possible risk management applications. zeb recommends starting with a potential analysis that identifies candidate use cases and ends with the selection of the pilot project offering the best cost / benefit ratio. A manageable (data) scope helps to gather experience, and convincing results are needed to achieve bank-internal buy-in for integrating big data into analysis and decision-making processes. zeb recommends addressing big data early on in order to secure one's position in this dynamic competitive environment.