AI

Advanced Time Intelligence in DAX with Performance in Mind | Towards Data Science

We all know the usual Time Intelligence functions based on years, quarters, months, and days. But sometimes we need to perform more exotic time intelligence calculations, and we should not forget to consider performance while programming the measures. Introduction: There are many DAX functions in Power BI for Time Intelligence measures. The most common are: … You can find a comprehensive list of Time Intelligence functions here: Time Intelligence – DAX Guide. These functions cover the most…

Read More »
AI

Multimodal Search Engine Agents Powered by BLIP-2 and Gemini | Towards Data Science

This post was co-authored with Rafael Guedes. Introduction: Traditional models can only process a single type of data, such as text, images, or tabular data. Multimodality is a trending concept in the AI research community, referring to a model’s ability to learn from multiple types of data simultaneously. This technology (not really new, but significantly improved in the last few months) has numerous potential applications that will transform the user experience of many products.
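
Since multimodal search hinges on comparing text and image embeddings in a shared space, here is a minimal sketch of that retrieval step; the random vectors stand in for real embeddings, and cosine_search is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def cosine_search(query_vec: np.ndarray, index: np.ndarray, top_k: int = 5):
    """Return the indices and scores of the top_k most similar items.

    query_vec: (d,) embedding of the text query.
    index:     (n, d) matrix of precomputed image embeddings.
    Both are assumed to live in a shared multimodal space,
    e.g. produced by a BLIP-2-style encoder.
    """
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = m @ q
    top = np.argsort(-scores)[:top_k]
    return top, scores[top]

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
image_index = rng.normal(size=(100, 256))
text_query = rng.normal(size=256)
ids, sims = cosine_search(text_query, image_index)
print(ids, sims)
```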

Read More »
AI

Formulation of Feature Circuits with Sparse Autoencoders in LLM | Towards Data Science

Large Language Models (LLMs) have made impressive progress, and these large models can do a variety of tasks, from generating human-like text to answering questions. However, understanding how these models work remains challenging, especially due to a phenomenon called superposition, where multiple features are mixed into one neuron, making it very difficult to extract a human-understandable representation from the original model structure. This is where methods like sparse autoencoders come in, disentangling the features for interpretability.
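
To make the idea concrete, here is a minimal sparse autoencoder sketch in PyTorch; the dimensions and the L1 coefficient are illustrative assumptions, not values from the article.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 sparsity penalty.

    d_model:  width of the LLM activations being analyzed.
    d_hidden: dictionary size, typically several times d_model,
              so each learned feature can occupy its own unit.
    """
    def __init__(self, d_model: int = 512, d_hidden: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        # ReLU keeps feature activations non-negative and sparse.
        f = torch.relu(self.encoder(x))
        return self.decoder(f), f

model = SparseAutoencoder()
x = torch.randn(32, 512)          # stand-in for captured LLM activations
x_hat, features = model(x)
l1_coeff = 1e-3                   # sparsity strength (illustrative)
loss = ((x - x_hat) ** 2).mean() + l1_coeff * features.abs().mean()
loss.backward()
```

The L1 term pushes most feature activations to zero, so each input activates only a handful of dictionary units, which is what makes the learned features individually inspectable.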

Read More »
AI

Zero Human Code: What I Learned from Forcing AI to Build (and Fix) Its Own Code for 27 Straight Days | Towards Data Science

27 days, 1,700+ commits, 99.9% AI-generated code. The narrative around AI development tools has become increasingly detached from reality. YouTube is filled with claims of building complex applications in hours using AI assistants. The truth? I spent 27 days building ObjectiveScope under a strict constraint: the AI tools would handle ALL coding, debugging, and implementation, while I acted purely as the orchestrator. This wasn’t just about building a product — it was a rigorous…

Read More »
AI

Data Scientist: From School to Work, Part I | Towards Data Science

Nowadays, data science projects no longer end with the proof of concept; every project aims to be used in production. It is important, therefore, to deliver high-quality code. I have been working as a data scientist for more than ten years, and I have noticed that juniors are usually weak in development, which is understandable, because to be a data scientist you need to master math, statistics, algorithmics, development, and have…

Read More »
AI

Accelerating scientific breakthroughs with an AI co-scientist

Acknowledgements: The research described here is a joint effort between many Google Research, Google DeepMind, and Google Cloud AI teams. We thank our co-authors at the Fleming Initiative and Imperial College London, Houston Methodist Hospital, Sequome, and Stanford University — José R Penadés, Tiago R D Costa, Vikram Dhillon, Eeshit Dhaval Vaishnav, Byron Lee, Jacob Blum and Gary Peltz. We are grateful to Subhashini Venugopalan and Yun Liu for their detailed feedback on the manuscripts described here. We…

Read More »

How to Fine-Tune DistilBERT for Emotion Classification | Towards Data Science

Customer support teams were drowning in an overwhelming volume of customer inquiries at every company I’ve worked for. Have you had similar experiences? What if I told you that you could use AI to automatically identify, categorize, and even resolve the most common issues? By fine-tuning a transformer model like BERT, you can build an automated system that tags tickets by issue type and routes them to the right team. In this tutorial, I’ll…
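
As a taste of what such a tutorial involves, here is a minimal fine-tuning sketch with the Hugging Face Trainer; the dair-ai/emotion dataset and all hyperparameters are assumptions for illustration, not the author's exact setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Emotion dataset: short texts labeled with one of six emotions.
dataset = load_dataset("dair-ai/emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Fixed-length padding keeps the example simple (no data collator needed).
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6)

args = TrainingArguments(output_dir="distilbert-emotion",
                         per_device_train_batch_size=16,
                         num_train_epochs=2)

trainer = Trainer(model=model,
                  args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```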

Read More »
AI

Learning How to Play Atari Games Through Deep Neural Networks | Towards Data Science

In July 1959, Arthur Samuel developed one of the first agents to play the game of checkers. What constitutes an agent that plays checkers can best be described in Samuel’s own words: “…a computer [that] can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program” [1]. The checkers agent tries to follow the idea of simulating every possible move…
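
For readers unfamiliar with that idea, here is a minimal depth-limited lookahead (minimax) sketch; the game interface is entirely hypothetical and stands in for the simulation Samuel describes.

```python
def minimax(state, depth, maximizing, game):
    """Exhaustively simulate moves to a fixed depth and score the outcomes.

    `game` is a hypothetical interface with moves(), result(),
    is_terminal(), and score() methods. Real checkers or Atari
    agents cannot search every line of play, which is why learned
    value estimates from deep neural networks take over this role.
    """
    if depth == 0 or game.is_terminal(state):
        return game.score(state)
    values = (minimax(game.result(state, m), depth - 1, not maximizing, game)
              for m in game.moves(state))
    return max(values) if maximizing else min(values)
```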

Read More »
AI

Honestly Uncertain | Towards Data Science

Ethical issues aside, should you be honest when asked how certain you are about some belief? Of course, it depends. In this blog post, you’ll learn what it depends on. Different ways of evaluating probabilistic predictions come with dramatically different degrees of “optimal honesty”. Perhaps surprisingly, the linear function that assigns +1 to true and fully confident statements, 0 to admitted ignorance, and -1 to wrong but fully confident statements incentivizes exaggerated, dishonest boldness. If you rate forecasts…
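
That claim about the linear rule is easy to verify with a few lines; a minimal sketch, assuming a binary statement you believe is true with probability p.

```python
def expected_linear_score(p: float, report: str) -> float:
    """Expected payoff of the +1 / 0 / -1 rule for a binary statement.

    p:      your true probability that the statement holds.
    report: 'confident_true', 'confident_false', or 'ignorant'.
    """
    if report == "ignorant":
        return 0.0
    if report == "confident_true":
        return p * 1 + (1 - p) * (-1)   # = 2p - 1
    return (1 - p) * 1 + p * (-1)       # = 1 - 2p

# Even a barely-held belief (p = 0.55) rewards full confidence:
for report in ("confident_true", "ignorant"):
    print(report, expected_linear_score(0.55, report))
# confident_true scores about 0.1, ignorant scores 0.0, so the rule
# pays you to overstate certainty whenever p differs at all from 0.5.
```

Because the expected score of going all-in is 2p - 1, it beats admitting ignorance for any p above 0.5, so under this rule an honest intermediate confidence is never optimal.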

Read More »
AI

How LLMs Work: Pre-Training to Post-Training, Neural Networks, Hallucinations, and Inference | Towards Data Science

With the recent explosion of interest in large language models (LLMs), they often seem almost magical. But let’s demystify them. I wanted to step back and unpack the fundamentals — breaking down how LLMs are built, trained, and fine-tuned to become the AI systems we interact with today. This two-part deep dive is something I’ve been meaning to do for a while, and it was also inspired by Andrej Karpathy’s wildly popular 3.5-hour YouTube video, which has racked…

Read More »