
To ensure output quality, we follow the steps below while our AI does the work.
1. Clearly comprehend the question
2. Identify and categorize the type of data we need
3. Select reliable sources
4. Clean the data
5. Automate the cleaning
6. Normalize it
7. Continuously test and refine
Let’s go all in!
1. Clearly comprehend the question
By working closely with our customers, discussing industry-specific movements and market trends in consumer behavior, we can quickly derive which questions our clients are asking themselves in order to move their brand or company forward. It all trickles down to one core question: “How do we become more relevant so we can increase our market share?” Through desk research, interviews, and client discussions, we build a backlog of questions for our AI to answer.
2. Identify and categorize the type of data we need
To get an objective view of the market we need to look at different kinds of data. At My Telescope we use four types (a small code sketch of these categories follows the list):
Perception data: what customers say about your brand or company, and the sentiment they express. Examples: social listening, surveys, media coverage, etc.
Behavioral data: there is a difference between saying and doing; this is the cold, factual record of how customers actually act. Examples: search, geolocation, web traffic, etc.
Market data: the driving forces of the market. Examples: interest rates, economic development, education levels, weather impact, etc.
Internal data: data coming directly from our clients. Examples: sales reports, operations, HR, etc.
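To make this concrete, here is a minimal sketch of how these four categories could be tagged in code. The class and field names are illustrative, not our actual data model:

```python
from dataclasses import dataclass
from enum import Enum


class DataCategory(Enum):
    """The four data types described above."""
    PERCEPTION = "perception"  # sentiment: social listening, surveys, media
    BEHAVIORAL = "behavioral"  # actions: search, geolocation, web traffic
    MARKET = "market"          # driving forces: interest rates, education
    INTERNAL = "internal"      # client-supplied: sales, operations, HR


@dataclass
class DataPoint:
    """One observation, tagged with its category and origin."""
    category: DataCategory
    source: str   # e.g. "Google Trends"
    metric: str   # e.g. "search_volume"
    value: float
    date: str     # ISO-8601 date string


# Example: one behavioral observation from search data
dp = DataPoint(DataCategory.BEHAVIORAL, "Google Trends",
               "search_volume", 72.0, "2023-05-01")
print(dp.category.value, dp.metric, dp.value)
```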
3. Select reliable sources
We only use reliable sources. Wherever possible, we collaborate directly with data providers and with companies whose data-gathering methodology we have vetted, or that are already widely accepted by the market (such as Google). Examples (a sketch of a simple source registry follows the list):
Perception data: Google News, YouGov, Twitter, Instagram, Facebook, Reddit, etc.
Behavioral data: geolocation, Google Search, Google Trends, Similarweb, etc.
Market data: World Bank, UNESCO, World Health Organization, etc.
Internal data: sales figures, employee satisfaction, etc.
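Below is a hypothetical sketch of what such a vetted-source registry could look like; the structure and function name are illustrative, though the sources mirror the examples above:

```python
# Hypothetical registry of vetted sources, keyed by data category.
SOURCE_REGISTRY = {
    "perception": ["Google News", "YouGov", "Twitter", "Instagram",
                   "Facebook", "Reddit"],
    "behavioral": ["Google Search", "Google Trends", "Similarweb"],
    "market": ["World Bank", "UNESCO", "World Health Organization"],
    "internal": ["sales figures", "employee satisfaction"],
}


def sources_for(category: str) -> list[str]:
    """Return the vetted sources for a data category (empty if unknown)."""
    return SOURCE_REGISTRY.get(category, [])


print(sources_for("market"))
# ['World Bank', 'UNESCO', 'World Health Organization']
```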
4. Clean the data
Perception data in particular can contain a lot of noise, which our AI cleans out; the same holds for the other data sets. These are the main data sets we feed our AI (a toy cleaning example follows the list):
Perception data: industry-specific queries, refined per company, make sure we use the right data, which is then filtered by the advanced sentiment-analysis machine-learning modules we developed. We also look at demographics, country, etc.
Behavioral data: our AI tracks the right users at the right times and in the right flows.
Market data: we make sure the data is recent enough to be used in the analysis and still relevant to the question our AI is trying to answer.
Internal data: in many cases we help our clients structure their own data in the right way to maximize the number of data points available for analysis. More data means more accurate answers.
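As a simplified illustration of the kind of noise filtering involved, here is a toy cleaning pass; the rules and field names are hypothetical stand-ins for the ML-driven filters we actually use:

```python
from datetime import datetime, timedelta


def clean_records(records, max_age_days=365):
    """Toy cleaning pass over perception-style records.

    Drops exact duplicates, records too old to stay relevant,
    and obvious noise (text too short to carry sentiment).
    """
    seen = set()
    cutoff = datetime.now() - timedelta(days=max_age_days)
    cleaned = []
    for rec in records:
        key = (rec["text"].strip().lower(), rec["source"])
        if key in seen:
            continue  # duplicate mention
        if rec["date"] < cutoff:
            continue  # stale data point
        if len(rec["text"].split()) < 3:
            continue  # too short to carry sentiment
        seen.add(key)
        cleaned.append(rec)
    return cleaned


records = [
    {"text": "Love the new product line!", "source": "Twitter",
     "date": datetime.now()},
    {"text": "ok", "source": "Reddit", "date": datetime.now()},
]
print(len(clean_records(records)))  # 1 -- the one-word post is dropped
```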
5. Automate the cleaning
We continuously automate as much as we can, using machine-learning components and natural language processing to make sure the data is clean and up to date.
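Here is a minimal sketch of what an automated text-cleaning pipeline can look like; the steps shown are deliberately simple stand-ins for the machine-learning and NLP components mentioned above:

```python
import re


# Each cleaning step is a small, testable function; the pipeline simply
# applies them in order. Real steps (and any ML components) would differ.
def strip_urls(text: str) -> str:
    return re.sub(r"https?://\S+", "", text)


def strip_mentions(text: str) -> str:
    return re.sub(r"@\w+", "", text)


def collapse_whitespace(text: str) -> str:
    return re.sub(r"\s+", " ", text).strip()


PIPELINE = [strip_urls, strip_mentions, collapse_whitespace]


def clean_text(text: str) -> str:
    for step in PIPELINE:
        text = step(text)
    return text


print(clean_text("Loving it @brand   https://t.co/xyz !"))  # "Loving it !"
```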
6. Normalize it
“Life is a fruit salad,” so yes, sometimes our AI has to compare apples with pears. Yet we need to make sure the fruit salad tastes good. In other words, the AI levels each data series out against the other data points in the combined sets, ensuring it is usable and comparable within the algorithms and correlations the AI runs.
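For illustration, here is one common way to put apples and pears on the same scale: min-max normalization, which maps any series onto a 0-to-1 range. This is a generic technique, not necessarily the exact method our AI applies:

```python
def min_max(series):
    """Scale a series onto [0, 1] so different units become comparable."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.0 for _ in series]  # constant series: no spread to scale
    return [(x - lo) / (hi - lo) for x in series]


# Apples vs. pears: a 0-100 search index next to raw web visits
search_volume = [40, 55, 70, 100]
web_visits = [12_000, 15_500, 14_000, 21_000]

print(min_max(search_volume))  # [0.0, 0.25, 0.5, 1.0]
print(min_max(web_visits))     # now on the same 0-1 scale
```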
7. Continuously test and refine
We keep testing our data and refining our models. We do this by using our machine learning to update queries and find cleaner data from the very start, but also by introducing new data sources where it makes sense to dig deeper into the matter at hand.
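As a simplified picture of that feedback loop, the sketch below flags a model for refinement when its error on fresh data drifts past a threshold; the metric and threshold are illustrative, not our production criteria:

```python
def mean_absolute_error(predicted, actual):
    """Average absolute gap between predictions and fresh observations."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)


def needs_refinement(predicted, actual, threshold=0.1):
    """Flag a model for query updates or new data sources when its
    error on fresh data drifts past an accepted threshold."""
    return mean_absolute_error(predicted, actual) > threshold


predicted = [0.42, 0.51, 0.48]
actual = [0.45, 0.50, 0.61]
if needs_refinement(predicted, actual):
    print("Refine: update queries or consider a new data source.")
else:
    print("Model within tolerance.")
```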

