
Top 20 Reasons People Misinterpret Marketing Data and Reports

Here are the top reasons why most people misinterpret analytics data and reports:

1. Lack of context

Different people interpret the same data and graphs differently. It all depends on the context in which they analyze the data.

A good understanding of that context will help you interpret analytics data more accurately.

Only by gaining a good understanding of your business can you truly understand the context in which you need to analyze and interpret analytics data.


It is important to understand the context in order to get the most out of your analyses; otherwise you risk wasting a lot of money on pointless analysis.

Optimizers who don't understand their business lack the context needed to analyze and interpret data. They suffer from directional difficulties.

Directional difficulties are the inability to go in the right direction at the right time. It is the inability to determine:

  • How and when to collect and analyze data.
  • What data to review.
  • What data to ignore.
  • Where to look in analytics reports.
  • How to turn business goals into tangible, measurable goals.

Just because you have access to data doesn't mean you should automatically analyze it. The cornerstone of any successful analysis is “going in the right direction”.

The direction your analysis takes will determine the direction of your marketing campaigns and ultimately how your business evolves to get the best return.

To find the right direction for your analytics data, you need to understand the context.

To understand the context, you need to have a good understanding of your business and industry, target market, competition, and business goals.

If you don't have a deep understanding of the context in which you analyze and interpret analytics reports, you're already headed in the wrong direction, and you will almost always get suboptimal results for your business.

2. Not understanding the intent

If you don't understand the intent behind data, information, visualizations, or interpretations, you won't be able to recognize cognitive biases (such as confirmation bias, selection bias, and attribution bias) in data interpretation/reporting.

For example: what is the difference between 7-day and 28-day click attribution windows, and why would a report's author choose one over the other?

Who produced the marketing reports you are reading, and with what intent?

3. Attribution bias

Attribution bias occurs when you make an assumption, judgment, or decision based on a very limited amount of information, context, and intent.

Jumping to conclusions, making hasty recommendations, judging a book by its cover, and arguing for something you don't understand are all examples.

Ask questions, and you will be able to overcome attribution bias.

4. Not understanding statistical significance

A statistically significant result is one that is unlikely to have occurred by chance. Conversely, a statistically insignificant result is one that could easily have occurred by chance.

The term “statistical significance” is often used in conversion optimization, and especially in A/B testing.

If an A/B test result is not statistically significant, you cannot act on it. And not every lift in A/B test results translates into an increase in sales.

When statistical significance is not well understood, it is easy to draw conclusions from a sample that is far too small.

Marketers who don't understand the role of statistics in marketing almost always struggle with paid advertising.

Here's how most conversations go in the marketing world:

“We launched the campaign 3 days ago. It has generated only 30 clicks, 3,000 impressions, and €90 in ad spend so far, with no sales. It isn't working. We cannot afford a CPA of €90. This campaign must be stopped.”

Here are the mistakes the marketer makes:

  • They evaluated the campaign's performance after only 3 days and drew conclusions. You need at least 7 days of data.
  • They decided to pause a campaign that is most likely still in the learning phase. A campaign in the learning phase is unlikely to produce its best results yet. It's like throwing away half-cooked food because it isn't tasty: you must finish cooking the food before you judge it.
  • A CTR of 1% means 30 clicks for 3,000 impressions, which is not considered bad in marketing. Before you can measure a campaign's success, you need to know your industry benchmarks.
  • Getting one sale out of thirty clicks requires an e-commerce conversion rate of 3.33%. Is that realistic for your website, niche, or industry? Can you achieve similar conversion rates through paid advertising?
  • Is a CPA of €90 high? First, an ad set in the learning phase tends to have a high CPA. Second, do you expect your advertising to be profitable right from the start, even though your campaign hasn't even left the learning phase? (The quick arithmetic below works through these numbers.)
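As a sanity check, here is the arithmetic from this scenario as a small Python sketch (the figures are the ones quoted above):

```python
# Back-of-the-envelope check of the campaign numbers quoted above.
clicks = 30
impressions = 3_000
ad_spend = 90.0  # euros
sales = 0

ctr = clicks / impressions  # 30 / 3000 = 0.01, i.e. a 1% CTR
required_cvr = 1 / clicks   # conversion rate needed for 1 sale from 30 clicks
cpa = ad_spend / sales if sales else None  # undefined while there are no sales

print(f"CTR: {ctr:.2%}")                               # CTR: 1.00%
print(f"CVR needed for one sale: {required_cvr:.2%}")  # 3.33%
print(f"CPA so far: {cpa if cpa is not None else 'undefined (no sales yet)'}")
```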

Your CPA is likely to drop if the campaign runs long enough, because the ad pixel accumulates more data with which to find the audience most likely to convert. Until that happens, you will likely have a negative ROI.

The numbers marketers react to are often statistically insignificant: 30 ad clicks, 3,000 impressions, and 3 days of runtime are just a few examples.

“Campaign A generated 10 conversions but campaign B only generated three, so campaign A performs better.” A sample this small does not allow us to draw such conclusions. Yet marketers keep tweaking campaigns based on 50 clicks, 2 conversions, 1,000 impressions, and so on. (The significance check below shows why that is a mistake.)
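Here is a minimal sketch of what a proper significance check on those two campaigns could look like. The conversion counts (10 and 3) come from the quote above, but the click counts of 500 per campaign are assumed for illustration, since none are given; Fisher's exact test is a reasonable choice for counts this small:

```python
from scipy.stats import fisher_exact

# Hypothetical data: 10 vs 3 conversions, each from an assumed 500 clicks.
conv_a, clicks_a = 10, 500
conv_b, clicks_b = 3, 500

table = [
    [conv_a, clicks_a - conv_a],  # campaign A: converted / not converted
    [conv_b, clicks_b - conv_b],  # campaign B: converted / not converted
]

odds_ratio, p_value = fisher_exact(table)
print(f"p-value: {p_value:.3f}")
# If the p-value is above 0.05, the 10-vs-3 gap is easily explained by
# chance, and "campaign A performs better" is not a safe conclusion.
```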

5. Simpson's paradox

Simpson's paradox occurs when a trend that appears in several different groups of data disappears or reverses when the groups are combined.

It happens when you can't see the forest for the trees, that is, when you are unable to see the whole picture.

Simpson's paradox also occurs when there is a lack of multidisciplinary approaches to solving business problems.

You can't make unexpected connections between different disciplines (like marketing or analytics) and all of your recommendations seem biased and siloed. This makes them less useful.
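Here is a minimal illustration of the paradox with made-up numbers: campaign A beats campaign B inside every segment, yet looks worse once the segments are combined.

```python
# Made-up conversion data per device segment: (conversions, clicks).
data = {
    "mobile":  {"A": (200, 1000), "B": (50, 300)},
    "desktop": {"A": (10, 20),    "B": (300, 700)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for segment, campaigns in data.items():
    for name, (conv, clicks) in campaigns.items():
        totals[name][0] += conv
        totals[name][1] += clicks
        print(f"{segment:8s} {name}: {conv / clicks:.1%}")

for name, (conv, clicks) in totals.items():
    print(f"combined {name}: {conv / clicks:.1%}")

# mobile   A: 20.0%  beats    mobile   B: 16.7%
# desktop  A: 50.0%  beats    desktop  B: 42.9%
# combined A: 20.6%  loses to combined B: 35.0%  (the trend reverses)
```

The reversal happens because campaign B's traffic is concentrated in the higher-converting desktop segment, which the combined numbers hide.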

6. Causal reductionism (over-simplification of causes)

Causal reductionism reduces an outcome to a single, clear cause. Example: X causes Y; therefore, X is the only cause of Y.

Here are some other examples:

“Our sales never increased despite months of posting on LinkedIn.”

But what if you never made a compelling offer, never promoted your LinkedIn posts, or your posts weren't aligned with your business goals?

“Our sales have increased by 20% since we started advertising on Facebook.”

But what if the 20% increase in sales is mainly due to a 10% increase in organic search traffic and a 2% drop in paid search traffic?

“Campaign A generated 100 orders. Campaign A is therefore the only cause of these 100 orders.”

But what if Campaign A got sales assists from Campaign C and Campaign D?

When you assume that there is one and only one cause for an outcome, you fall into causal reductionism (the fallacy of the single cause).

This false assumption makes it difficult to understand customers' true buying patterns and to attribute conversions correctly.

Analytics reports are not simply what they say; they are what your interpretation makes of them. And there are almost always several possible causes behind an end result.

Most attribution problems can be traced back to causal reductionism.
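As one way of thinking past a single cause, here is a sketch of simple linear multi-touch attribution. The conversion paths and campaign names are hypothetical; the point is only that credit is shared across every touchpoint instead of being handed to one campaign:

```python
from collections import defaultdict

# Hypothetical conversion paths: each list is the sequence of campaigns
# a customer touched before ordering.
conversion_paths = [
    ["Campaign C", "Campaign D", "Campaign A"],  # A closed, C and D assisted
    ["Campaign A"],
    ["Campaign D", "Campaign A"],
]

credit = defaultdict(float)
for path in conversion_paths:
    share = 1 / len(path)  # split one order equally across its touchpoints
    for campaign in path:
        credit[campaign] += share

for campaign, orders in sorted(credit.items()):
    print(f"{campaign}: {orders:.2f} orders")
# Campaign A: 1.83, Campaign C: 0.33, Campaign D: 0.83, which is a long
# way from "Campaign A is the only cause of these orders".
```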

7. The Dunning-Kruger effect

This is a type of cognitive bias whereby people believe themselves to be smarter or more capable than they actually are.

This translates into not asking questions (or not asking enough of them), making a lot of assumptions, jumping to conclusions, speaking in absolute terms, and bearing the full burden of proof alone.

Many people who have spent a long time in their profession consider themselves experts in their field. They fear that asking too many questions would undermine their professional standing and make them look ignorant. So they either don't ask enough questions or don't ask any at all.

But here is the truth: you can never be that certain about someone else's business. You don't know what's going on inside it.

You don't have to be 100% certain that the recommendations you make will work.

8. The streetlight effect (or the drunkard's search principle)

This is a type of observational bias in which you look for information where it is easiest to look, rather than where it is most likely to be found.

For example,

  • Looking only at the first 10 rows of your analytics reports.
  • Always watching the same set of dashboards.
  • Not segmenting your data enough.
  • Relying only on default reports and templates.

This is one of the main reasons people misinterpret data and analytics reports.

The streetlight effect can cause you to miss details. Over time, your understanding of campaign and website performance becomes shallow.

If you're used to always looking at the same set of reports or dashboards, it's time for a change.

Look at different sets of metrics. Segment your data in different ways. That is how you will discover hidden insights.

9. Confirmation bias

Confirmation bias is when you actively look for information and patterns in data to confirm your existing theories or beliefs.

You may begin to see a pattern or trend in the data that isn't really there, read a test result as confirming your hypothesis, or place too much emphasis on data that supports your existing beliefs.

10. Selection bias

Selection bias occurs when you choose a sample that does not represent all of the data. It can lead to erroneous conclusions.

Selection bias often skews A/B test results, producing apparent sales lifts that never materialize.
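Here is a toy sketch of how selection bias distorts a metric: estimating a site-wide conversion rate from returning visitors only, versus from a random sample. Every rate and population size below is invented for illustration:

```python
import random

random.seed(42)
# Invented population: returning visitors convert at 8%, new ones at 2%.
population = (
    [("returning", random.random() < 0.08) for _ in range(2_000)]
    + [("new", random.random() < 0.02) for _ in range(8_000)]
)

true_rate = sum(conv for _, conv in population) / len(population)
biased = [conv for kind, conv in population if kind == "returning"]
unbiased = random.sample([conv for _, conv in population], 2_000)

print(f"true rate:          {true_rate:.1%}")
print(f"biased sample says: {sum(biased) / len(biased):.1%}")
print(f"random sample says: {sum(unbiased) / len(unbiased):.1%}")
# The returning-visitor sample roughly triples the apparent conversion rate.
```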

11. Observation bias

Observational bias is the tendency to take things at face value, which often leads to observation errors.

Campaign A has a conversion rate of 5%, while campaign B has a conversion rate of 10%.

Based on that observation alone, you might assume that campaign B performs better. But it is quite possible that the difference between the two conversion rates is not statistically significant.

12. Cumulative error

Cumulative error is error that grows as it propagates: the further a wrong number travels through an analysis, the more damage it does.

One wrong conclusion can thus lead to several more wrong conclusions, which can ultimately invalidate your entire analysis.

Here is an example of a cumulative error:

  • You submitted a sales performance report to your manager, but the sales figure in the report was incorrect.
  • Your manager presented your report to the board, believing it to be accurate.
  • The board used the report to make a business decision, which was then communicated to everyone in your company.
  • Because the decision was based on an erroneous report, every employee who acts on that decision compounds the error.

A small mistake can quickly lead to a catastrophic failure for the whole company.
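A small numeric sketch of that compounding, with made-up figures: one overstated revenue number flows into ROAS, and from there into next month's budget:

```python
# One bad input: revenue is overstated by 10% in the report.
true_revenue = 100_000.0
reported_revenue = true_revenue * 1.10

ad_spend = 40_000.0
true_roas = true_revenue / ad_spend          # 2.50
reported_roas = reported_revenue / ad_spend  # 2.75

# Downstream budget rule: scale next month's spend by ROAS / target ROAS.
target_roas = 2.5
budget_from_true = ad_spend * (true_roas / target_roas)        # 40,000
budget_from_report = ad_spend * (reported_roas / target_roas)  # 44,000

print(f"budget misallocation: {budget_from_report - budget_from_true:,.0f}")
# A single wrong number in one report quietly became a 4,000 budget error,
# and every further decision built on it grows the damage.
```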

13. Relying only on anecdotal evidence

Relying on anecdotal evidence for decisions and research can lead to faulty testing, misinterpretation, misallocation of resources, and even financial loss.

For example,

Suppose you once doubled a client's conversion rate by changing their CTA buttons to red.

Based on this anecdotal evidence, you might conclude that changing the CTA color to red is good practice and will most likely increase any site's conversion rate.

It is important to remember that one or a few (possibly isolated) examples cannot be taken as definitive proof of a larger thesis.

14. Not understanding the Pareto principle (80/20 rule)

The Pareto principle says that roughly 80% of results come from 20% of causes. In analytics terms, about 20% of the data you have is enough to produce most of the insight you need; much of the rest is noise.

Optimizers often fail to focus their analysis on that critical 20% of the data, because they don't conduct a highly focused, meaningful analysis from the start.
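One practical response is to measure the concentration in your own data. A minimal sketch, with invented revenue figures, showing what share of revenue the top 20% of campaigns produce:

```python
# Invented revenue per campaign, one number per campaign.
revenues = [50_000, 22_000, 9_000, 4_000, 3_000,
            2_500, 2_000, 1_500, 1_000, 500]

revenues.sort(reverse=True)
top_n = max(1, len(revenues) // 5)  # the top 20% of campaigns
top_share = sum(revenues[:top_n]) / sum(revenues)

print(f"top {top_n} of {len(revenues)} campaigns: {top_share:.0%} of revenue")
# Here 2 campaigns out of 10 produce about 75% of revenue; that is where
# a focused analysis should start.
```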

15. The Woozle Effect

The Woozle effect is evidence by citation: a claim is quoted so often without supporting evidence that, over time, it comes to be believed as true or factual.

For example,

A blog, “A”, claimed that a high bounce rate is bad news for your website, without citing any evidence.

Over time, ten other blogs made the same claim, citing blog “A”. Eventually, the industry accepts a high bounce rate as a bad sign.

So if your reports show a high bounce rate, you might conclude that it's bad news.

Be aware that a widely repeated claim is not necessarily factual. Do your own research and draw your own conclusions.

Don't rely on case studies alone to interpret data or to make business and marketing decisions.

16. The fallacy of correlation and causation

Two things may seem related, but that doesn't necessarily mean one causes the other.

For example,

Your client: “Our website traffic dropped 50% last week. We also switched to enhanced e-commerce tracking in the same week.”

Now, if you come to the following conclusion, you are likely falling prey to the correlation and causation fallacy:

“Their site traffic dropped 50% last Wednesday because they switched to enhanced e-commerce tracking.”

If you go looking for a relationship between two events or variables, you will usually find one. But the mere presence of a relationship between two variables does not mean that one causes the other.

In other words, correlation does not imply causation.

17. Overreliance on Analytics Reports

Analytics reports shouldn't be the first thing you look at.

I often ask my clients, for example: “Where do most of your clients live?”

This question can easily be answered in Google Analytics.

But I keep asking because I'm not sure whether the GA report I'm looking at gives me an accurate picture. There may be a problem with data collection or data sampling that skews the analytics data.

I want to check whether my client's understanding matches what my analytics reports say. This allows me to quickly detect anomalies in the data.

If my client tells me that their top-selling products are “X” but my e-commerce report tells me that their top-selling products are “Y”, then either my client or my analytics data is wrong.

Either way, I have to do some detective work.

One of the biggest problems with trying to figure everything out on our own is that we tend to make assumptions about the issues our clients are facing, build our analysis around those assumptions, and often fail spectacularly.

Asking the people who run a business a lot of questions is the best way to gain a solid understanding of it.

Understand that no one can answer questions about a business as accurately as the people who actually run it.

You need to understand that data analysis and A/B testing cannot replace the knowledge your client has acquired over years of running a successful business.

What's the point of spending hours and days looking for information and ideas that are already known to someone in your organization?

Your time would be better spent finding answers to questions no one else can answer. But to do that, you need to know which questions have already been answered.

18. Not being aware of activities/events that may have a significant impact on your data

You must take into account every activity, news item, event, or change that could have a significant impact on your data, whether it occurs daily or weekly.

These changes may include, but are not limited to:

  • A major overhaul of the site.
  • Launch of a new product, or a promotional campaign.
  • Discontinuation or modification of a product/process/campaign.
  • A significant change in management or company policies, marketing budgets, and/or processes.
  • Any changes to your digital analytics account (such as adding or removing filters, adding or removing views, etc.).
  • Any change in the way data is collected, analyzed and integrated.
  • A marked change in consumer behavior.
  • A significant change in the competitive landscape (such as the entry of a strong and significant competitor).
  • Any positive or negative news about your business and your competitors.
  • Changes in the economy, market conditions, and anything else that can affect your data.

Being in the loop keeps you informed of these changes.

Being in the loop means being aware of, and in touch with, what is happening around you, your organization, and your client's organization.

In practice, this means being present at every marketing team meeting, every board meeting, and every email thread in which important decisions are made about your:

  • Company
  • Website(s)
  • Promotional campaign(s)
  • Business processes
  • Marketing budgets
  • Current data collection and integration

19. Failing to maintain an up-to-date database of material changes that may significantly affect your data

You should maintain an up-to-date log of every change that could significantly affect your data.

Consider the following examples (a minimal logging sketch follows the list):

  • Note the date and time you installed the Facebook pixel code on your website.
  • Note the date, time, and duration of any server downtime, or of any period when the website was in maintenance mode.
  • Write down the day and time you started or paused a marketing campaign.
  • Keep track of all changes made to your site.
  • Note any change to the marketing budget, or any change in how data is currently collected or integrated.
  • Note anything else that could have an immediate or significant impact on your data.
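A minimal sketch of such a log as an append-only CSV file follows; the file name and fields are illustrative choices, not a standard:

```python
import csv
from datetime import datetime, timezone

LOG_FILE = "data_change_log.csv"  # hypothetical file name

def log_change(category: str, description: str) -> None:
    """Append one dated entry, e.g. a pixel install or a paused campaign."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(timespec="seconds"),
            category,
            description,
        ])

log_change("tracking", "Installed Facebook pixel on all product pages")
log_change("campaign", "Paused Campaign A (summer promo)")
log_change("site", "Deployed redesigned checkout flow")
```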

20. Not understanding the math and statistics behind web analytics and conversion optimization

Analyzing data without a basic understanding of math and statistics will most likely lead to wrong conclusions and loss of money.

Using statistics and math correctly is one of the best ways to interpret data.

Learning statistics and mathematics sharpens your critical and logical thinking. It will make you a better marketer as well as a better analyst.

Knowledge of mathematics and statistics will allow you to interpret data accurately and spot anomalies quickly.

For example, if you see that a reported conversion rate is based on a very small sample of data, you instantly know that it may not be statistically significant (i.e., it could easily be due to chance).

Google Analytics reports contain many averages, and averages are easy to misinterpret if your understanding of them is incomplete. A single average can hide wide variation, so GA reports can give you a poor or misleading picture.
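Here is a small made-up example of how a single average can mislead: a handful of very long sessions pulls the mean far above what a typical visitor actually does:

```python
from statistics import mean, median

# Invented session durations in minutes for ten visitors.
session_minutes = [1, 1, 2, 2, 2, 3, 3, 4, 40, 42]

print(f"mean:   {mean(session_minutes):.1f} min")    # 10.0 min
print(f"median: {median(session_minutes):.1f} min")  # 2.5 min
# "Average time on site: 10 minutes" sounds great, yet the typical
# visitor leaves in under 3 minutes.
```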

Another example: the conversion rate is one of the most misunderstood metrics.

Because they lack statistical skills, many optimizers are unaware that conversion rate can correlate negatively with sales and profit.

They believe conversion rate always correlates positively with sales: if the conversion rate goes up, sales and profit must go up as well. This is not always true.

Using bad analytical data will only make the situation worse.

Here are some questions to consider (Q1 is worked through just after the list):

  • Q1. When your website's conversion rate rises from 10% to 12%, is that a 2% increase or a 20% increase?
  • Q2. If the average time spent on your website is ten minutes, does that mean visitors typically spend ten minutes browsing it?
  • Q3. If campaign A's conversion rate is 1% and campaign B's is 5%, does that mean campaign B is performing better than campaign A?
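Q1 is worth working through explicitly, because it trips up so many reports. A few lines of arithmetic separating absolute (percentage-point) change from relative change:

```python
# Conversion rate moves from 10% to 12%.
old_rate, new_rate = 0.10, 0.12

absolute_change = new_rate - old_rate               # 0.02 -> 2 percentage points
relative_change = (new_rate - old_rate) / old_rate  # 0.20 -> a 20% relative lift

print(f"absolute: +{absolute_change * 100:.0f} percentage points")
print(f"relative: +{relative_change:.0%}")
# Reporting the move from 10% to 12% as "a 2% increase" understates a 20%
# improvement, and invites exactly the credibility hit described next.
```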

The corporate world is not very forgiving of optimizers' errors. If we report that jump in conversion rate from 10% to 12% as “a 2% increase”, we instantly cast a shadow over the rest of our analysis: the recipient's first thought will be to question everything else in it.

Many web analysts are unsure of the role statistics and mathematics play in web analytics and conversion optimization.
