The Essence of AI: Why Individual-Level Data is Crucial



In the rapidly evolving world of technology, Artificial Intelligence (AI) stands out as a beacon of transformative potential. From automating mundane tasks to making complex decisions, AI promises to revolutionize industries and redefine the way we work and live. Yet, as with any powerful tool, the key to harnessing its full potential lies in understanding its intricacies. One such intricacy, often overlooked yet fundamental, is the type of data AI requires. While many envision AI as a futuristic entity that magically produces results, the truth is far more nuanced. The essence of AI’s capability is deeply rooted in the granularity of the data it’s trained on. Through a personal encounter in Milan’s bustling investment banking sector, I was reminded of this pivotal aspect and the common misconceptions surrounding it. In this blog post, we’ll delve into the heart of AI, demystifying the importance of individual-level data and why it’s the cornerstone of any successful AI implementation.

A Real-World Encounter: The Importance of Data Granularity

In an online meeting, I found myself facing a prominent figure in Milan’s esteemed investment banking sector. The meeting was charged with anticipation as we delved into a topic that has been at the forefront of many business discussions lately: Artificial Intelligence. This person, with a keen interest in leveraging AI, had a vision to develop a model that could assess the health risks of individuals during the ongoing pandemic. The pressing question on his mind was straightforward yet profound: “What data do we need to build such a model?”

My response, while simple, carried the weight of years of experience in the field: “AI models need to be trained on data with the same granularity as the input data for which they are developed. If we aim to predict outcomes for individual profiles, we require data on individuals, preferably with the actual value of what we want to predict for as many of them as possible.”

The surprise was evident on his face. He had initially believed that aggregated statistics, such as means or medians, would be sufficient. This is a common misconception; while aggregated data can provide insights into general trends and patterns, and is a key ingredient for many computational models, it lacks the specificity required for individual predictions. It was a pivotal moment of realization, underscoring the importance of data granularity in the realm of AI. While many computational models during the pandemic utilized aggregated data to simulate overall population behavior and the spread of the virus, they weren’t designed to make reliable predictions for each individual within that population.
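To make this concrete, here is a minimal sketch, with entirely hypothetical ages and risk labels, of why individual-level rows support learning a predictive rule while a single aggregated row does not:

```python
# Hypothetical individual-level rows: (age, high_risk label).
individuals = [
    (25, 0), (31, 0), (38, 0), (45, 0),
    (52, 1), (60, 1), (67, 1), (74, 1),
]

# With per-person rows we can search for the age cut-off that best
# separates low-risk from high-risk cases.
def best_threshold(rows):
    def accuracy(t):
        return sum((age >= t) == bool(label) for age, label in rows) / len(rows)
    return max(sorted(age for age, _ in rows), key=accuracy)

print(best_threshold(individuals))  # 52: a perfect split on this toy data

# The aggregated view collapses everything into a single row -- the mean
# age and the overall risk rate -- from which no per-person rule follows.
mean_age = sum(age for age, _ in individuals) / len(individuals)
risk_rate = sum(label for _, label in individuals) / len(individuals)
print(mean_age, risk_rate)  # 49.0 0.5: one data point, zero granularity
```

A real model would of course use many more features and a proper learning algorithm, but the asymmetry is the same: the individual rows let us fit and validate a rule; the aggregate leaves nothing to fit.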

The Common Misconception: AI as a Sci-Fi Robot

The allure of science fiction has painted a vivid picture of AI in the minds of many. It’s easy to imagine a world where robots, powered by AI, operate autonomously, absorbing domain knowledge and then flawlessly executing tasks as a human would. This vision, while captivating, often leads to a skewed understanding of how AI truly functions.

While it’s true that domain expertise can significantly enhance AI performance, it’s not the magic wand that many believe it to be. For instance, the safety filters in ChatGPT or the use of multiple sequence alignments in AlphaFold2 are prime examples of how domain knowledge can be incorporated to improve AI systems. However, they are just one piece of a much larger puzzle.

Relying solely on domain knowledge without the right data is akin to expecting a car to run without fuel. The car might be state-of-the-art, but without fuel, it’s going nowhere. Similarly, an AI system, no matter how advanced, requires the right kind of data to function optimally. The misconception of AI as a self-sufficient, sci-fi robot can lead businesses and individuals astray, causing them to overlook the foundational importance of data in the AI equation.

The Reality of AI: Data-Driven Learning

In the vast landscape of technology, AI stands out not for its ability to think like a human, but for its capacity to learn from data. This learning process is fundamentally different from the way humans acquire knowledge. While we might learn from experiences, narratives, or abstract concepts, AI models thrive on concrete examples, and the more diverse and comprehensive these examples are, the better.

Imagine teaching a child to recognize a cat. You might show them a few pictures, describe the animal’s features, or even introduce them to a real cat. Soon, the child understands the concept of a ‘cat’. However, for an AI model to recognize a cat, it needs thousands, if not millions, of images labeled as ‘cat’. Each image serves as an example, teaching the model what characteristics define a cat.

This data-driven learning is the bedrock of AI. The richness and diversity of the examples determine the model’s effectiveness. It’s not about merely having vast amounts of data; it’s about having the right kind of data. An AI model trained on diverse examples can generalize better, making accurate predictions even in unfamiliar scenarios.
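As a toy illustration of this point, the sketch below uses hypothetical feature vectors and a nearest-neighbour rule standing in for a real model; a narrowly sampled training set misclassifies an unfamiliar case that a more diverse set handles correctly:

```python
import math

# 1-nearest-neighbour rule: predict the label of the closest training example.
# Features are hypothetical (body_weight_kg, ear_length_cm).
def predict(train, x):
    nearest = min(train, key=lambda row: math.dist(row[0], x))
    return nearest[1]

# Narrow training set: only small cats and large dogs.
narrow = [((3, 4), "cat"), ((4, 5), "cat"),
          ((12, 10), "dog"), ((14, 11), "dog")]

# Diverse training set also covers large cat breeds.
diverse = narrow + [((9, 7), "cat"), ((11, 8), "cat")]

query = (10, 8)  # a large cat unlike anything in the narrow set

print(predict(narrow, query))   # "dog" -- the narrow set generalizes poorly
print(predict(diverse, query))  # "cat" -- the diverse set covers the case
```

The algorithm is identical in both runs; only the coverage of the training examples changes, and with it the quality of the prediction.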

In essence, while the allure of AI often revolves around its advanced algorithms and computational prowess, its true strength lies in its ability to learn from data. Without data, even the most sophisticated AI model is like a book with blank pages – full of potential, but devoid of knowledge.

Understanding Data Granularity in AI

In the realm of data science, granularity refers to the level of detail or precision present in the data. It’s a concept that, while seemingly straightforward, holds immense significance when it comes to AI.

Consider a jigsaw puzzle with monochromatic pieces. If you’re given large pieces, you can assemble the puzzle quickly, but the resulting image might lack detail. Conversely, if you have many small pieces, assembling becomes more intricate, but the final image is much more detailed. Similarly, in AI, the granularity of the data determines the precision of the model’s predictions.

Returning to the earlier example with the investment banker from Milan, the distinction between aggregated data and individual-level data becomes clear. Aggregated data, like the average age and gender prevalence of a city, provides a broad overview of the pandemic’s risk. It’s useful for understanding general trends but falls short when making predictions about individual entities. On the other hand, individual-level data, such as the age, gender, medical tests, and DNA of each resident in a city, offers detailed insights, potentially allowing for precise predictions.
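A small sketch, with made-up ages and an assumed risk cut-off of 65, shows why the aggregate alone cannot support individual predictions: two very different populations can share exactly the same summary statistic.

```python
from statistics import mean

# Two hypothetical cities with identical aggregate statistics.
city_a_ages = [30, 30, 70, 70]  # polarized: young and old
city_b_ages = [48, 50, 50, 52]  # uniformly middle-aged

print(mean(city_a_ages), mean(city_b_ages))  # 50 50 -- indistinguishable aggregates

# An individual-level rule (assume age >= 65 flags elevated risk) gives
# completely different answers for the two populations.
flagged_a = sum(age >= 65 for age in city_a_ages)
flagged_b = sum(age >= 65 for age in city_b_ages)
print(flagged_a, flagged_b)  # 2 0 -- the mean alone could never tell them apart
```

Aggregation is a many-to-one mapping: once the individual rows are collapsed into a mean, the information needed to distinguish these cases, let alone their residents, is gone.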

Whether the subject is a person, a car, a product, or any other entity, the granularity of the data matters immensely. If the goal is to predict a specific attribute of an item, the training data must be rich with examples of similar items with known attributes. It’s not just about quantity but about the quality and specificity of the data.

Furthermore, data can take on various forms: rows in a table, images, snippets of text, audio recordings, and more. Regardless of its form, the crucial factor is ensuring that the granularity of the data aligns with the intended application of the AI model. Only then can the model be trained effectively, ensuring accurate and reliable predictions.

The Variety of Data Formats

In the digital age, data is everywhere, and it comes in a myriad of formats. From the text messages we send to the photos we capture, from the songs we stream to the spreadsheets we maintain – each is a unique format of data. And for AI, each of these formats offers a distinct avenue for learning and application.

Imagine the vastness of the internet. It’s a treasure trove of data, with websites filled with text, social media platforms bursting with images and videos, and streaming services hosting countless audio files. Each of these data formats has its own set of characteristics and potential applications in AI.

  • Images: From medical imaging to facial recognition, images offer a visual representation of information. AI models trained on images can detect patterns, recognize entities, and even generate new images.

  • Text: Articles, tweets, clinical notes, omics data and more – text data is abundant. Natural Language Processing (NLP) models thrive on this form, enabling tasks like sentiment analysis, translation, and chatbot functionalities.

  • Audio: Voice assistants like Siri or Alexa are prime examples of AI models trained on audio data. From speech recognition to music recommendation, the applications are vast.

The beauty of AI lies in its adaptability. Regardless of the data format, what’s crucial is ensuring that the information aligns with the model’s intended purpose. The granularity, as discussed earlier, must match the application. For instance, if an AI model is designed to recognize individual voices, it needs audio samples from various individuals. If it’s meant to make a diagnosis, it requires detailed data on patients both with and without that diagnosis.

While the algorithms and architectures behind AI models are undoubtedly complex, their effectiveness is deeply rooted in the versatility and granularity of the data they’re trained on.


The journey into the world of Artificial Intelligence is both exhilarating and intricate. As we’ve explored, AI is not just about advanced algorithms or futuristic robots; it’s deeply rooted in the data it learns from. Whether you’re a business leader, a researcher, or an enthusiast, understanding the nuances of data granularity, the versatility of data types, and the distinctions between different AI models is paramount.

My meeting with the investment banker from Milan and the subsequent discussions underscore a pivotal lesson: AI’s true potential is unlocked not just by the technology itself but by the quality and specificity of the data it’s trained on. Whether you’re aiming for the broad adaptability of foundation models or the precise expertise of individual-purpose models, the right data is the key.

As we stand on the cusp of an AI-driven era, it’s imperative to approach this technology with both enthusiasm and knowledge. Recognizing the importance of individual-level data, understanding the intricacies of data types, and choosing the right model for the task will be the cornerstones of success in any AI endeavor.

To those venturing into the realm of AI, remember: it’s not just about teaching machines to think; it’s about providing them with the right examples to learn from.

Call to Action: Take the Free Data Maturity Quiz

In the world of data science, understanding where you stand is the first step towards growth. Are you curious about how data-savvy your company truly is? Do you want to identify areas of improvement and gauge your organization’s data maturity level? If so, I have just the tool for you.

Introducing the Data Maturity Quiz:

  • Quick and Easy: With just 14 questions, you can complete the quiz in less than 9 minutes.
  • Comprehensive Assessment: Get a holistic view of your company’s data maturity. Understand the strengths and areas that need attention.
  • Detailed Insights: Receive a free score for each of the four essential data maturity elements. This will provide a clear picture of where your organization excels and where there’s room for growth.

Taking the leap towards becoming a truly data-driven organization requires introspection. It’s about understanding your current capabilities, recognizing areas of improvement, and then charting a path forward. This quiz is designed to provide you with those insights.

Ready to embark on this journey?
Take the Data Maturity Quiz Now!

Remember, knowledge is power. By understanding where you stand today, you can make informed decisions for a brighter, data-driven tomorrow.
