Part 1 | Hammer Goldmine | Introduction
At Hammer we strive to find what others cannot find. Unfortunately, finding what others cannot find often takes a lot of time. That is why, parallel to our regular work in the field of market intelligence, we are constantly looking for ways to streamline our workflow. Our biggest and most recent project in this regard is the development of an internal data warehouse. By creating our own data warehouse, we will have more control over how we shape and label the data, making it easier to find useful information hidden in the vast amount of data available online today.
Ideas for an internal database have been discussed within the Hammer organization before. The competitive advantage is clear when you can get useful information faster than your competitors. However, maintaining and annotating a large database is a tedious and time-consuming task. The trade-off was therefore unattractive, and the ideas for such a database remained on the backlog. It was not until recent technological advances came around that the idea of an internal data warehouse started to resurface more persistently. What if we did not have to maintain this database ourselves? Can we build a database that provides us with useful information without any human interaction?
This is the starting point of Hammer Goldmine.
For Hammer Goldmine we are building an autonomous database that continuously ingests potentially useful information and assigns it to categories relevant to the various types of projects we are (or will be) working on.
Over the next few weeks, we will highlight the individual parts of the Goldmine pipeline and how we expect them to improve our workflow. The Goldmine architecture can be divided into three segments (a small sketch of how they fit together follows the list):
1. A scraper
2. A machine learning model
3. A database / search engine
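To make the division of labour between these segments concrete, here is a minimal Python sketch of how the three pieces could hand data to one another. All names here (Document, scrape, classify, index) and the keyword rule are illustrative placeholders of our own choosing, not the actual Goldmine implementation.

```python
# Illustrative sketch of a three-stage pipeline: scrape -> classify -> index.
# Every name and rule below is a stand-in, not the real Goldmine code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Document:
    url: str
    text: str
    category: Optional[str] = None


def scrape(sources: list) -> list:
    """Segment 1: collect raw documents from a list of source URLs."""
    # A real scraper would fetch and parse pages; here we return placeholders.
    return [Document(url=src, text=f"raw text from {src}") for src in sources]


def classify(doc: Document) -> Document:
    """Segment 2: assign each document to a project-relevant category."""
    # A trained machine learning model would go here; this keyword rule is a stand-in.
    doc.category = "market-intel" if "market" in doc.text.lower() else "other"
    return doc


def index(docs: list) -> dict:
    """Segment 3: store documents so they can be searched by category."""
    store: dict = {}
    for doc in docs:
        store.setdefault(doc.category, []).append(doc)
    return store


if __name__ == "__main__":
    raw = scrape(["https://example.com/market-news"])
    store = index([classify(doc) for doc in raw])
    print({category: len(docs) for category, docs in store.items()})
```

In the actual pipeline each of these stand-ins is replaced by a dedicated component: the scraper collects data from online sources, the machine learning model performs the categorisation, and the database/search engine makes the results retrievable.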
In our next blog post we will discuss part 2 of Hammer Goldmine: the implementation, covering the scrapers and the AI model behind the database. Stay tuned!