Hi there,
I have been working with the Hadoop framework for over 3 years now. Hadoop has changed a lot since its early years: from a mere low-cost solution for storing large datasets via HDFS and analysing them via MapReduce, to a constantly evolving ecosystem of applications. The most used tools in the Hadoop ecosystem nowadays are Spark (an in-memory processing engine slated to replace MapReduce), Hive (the default SQL data warehouse for Hadoop ecosystem tools), Impala, R, Pig, HBase, etc. But the actual set of big data tools depends on the exact requirements of the project and the expected outcome. I am currently working on a few big data projects at my company and can surely help you in your project as a Big Data Architect.
In fact, in my current project I am doing exactly the same thing: we are calculating Economic Capital (i.e. risk capital) for insurance firms, using different models for the calculation. The Spark engine reads all the data from a Hive table, runs the model calculation in memory, and writes the results to another Hive table in the form of an OLAP cube. Our visualization tool (Tableau) reads from this Hive cube and displays the data to the end business users. We are also using Oozie to schedule the workflows.
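To give you a concrete idea of the read-compute-write pattern, here is a minimal PySpark sketch. The table names (risk.policy_inputs, risk.ec_results_cube), the column names, and the placeholder aggregation are all illustrative; the actual Economic Capital models in our project are more involved.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Start a Spark session with Hive support so spark.table() and
    # saveAsTable() read from and write to Hive-managed tables.
    spark = (SparkSession.builder
             .appName("economic-capital")
             .enableHiveSupport()
             .getOrCreate())

    # 1. Read the raw exposure data from a Hive table.
    inputs = spark.table("risk.policy_inputs")

    # 2. Run the model calculation in Spark's in-memory engine.
    #    cube() produces all grouping-set combinations of the two
    #    dimensions, i.e. an OLAP-cube-style result. The sum here is
    #    just a stand-in for the real capital models.
    results = (inputs
               .cube("line_of_business", "region")
               .agg(F.sum("capital_charge").alias("economic_capital")))

    # 3. Write the cube back to Hive; Tableau then reads this table.
    results.write.mode("overwrite").saveAsTable("risk.ec_results_cube")

Let me know if you have any questions.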
Thanks,
Souvik