Written by Amber Jain

Understanding LLM Size, Weights, Parameters, Quantization, KV Cache & Inference Memory

How much RAM do you need to run a 30-billion-parameter model? Why are there multiple versions of the same model at different file sizes? What does "8-bit quantization" actually mean, and how does it affect performance and precision? If you're running language models locally, or planning to, understanding the relationship between parameters, weights, quantization, and memory is essential.
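The core relationship is simple back-of-the-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter. A minimal sketch (my own illustrative numbers, not from the article; it ignores KV cache, activations, and runtime overhead):

```python
def weight_memory_gib(num_params: float, bits_per_param: int) -> float:
    """Approximate weight memory in GiB: params × bits / 8, converted to GiB."""
    bytes_total = num_params * bits_per_param / 8
    return bytes_total / (1024 ** 3)

# A hypothetical 30-billion-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gib(30e9, bits):.1f} GiB")
# → 16-bit: ~55.9 GiB, 8-bit: ~27.9 GiB, 4-bit: ~14.0 GiB
```

This is why the same model ships at several file sizes: each quantization level roughly halves the memory footprint, trading some precision for the ability to fit on smaller hardware.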

The Inflection Point: Why Renowned Programmers Changed Their Minds on AI Coding

As I write this in mid-January 2026, I can't shake the feeling that something flipped in the last few weeks. Hacker News and Reddit are filled with links to posts from purist, respected programmers (people I have personally looked up to since 2007, when I started studying programming) suddenly singing the praises of LLM coding. Some of them had previously written off "(Vibe/Agentic) Coding with AI" as good only for hobby projects, code reviews, or first drafts.

Sectoral and Thematic Funds: When to Consider (2026)

In 2026, many Indian investors are looking for focused ways to participate in visible trends such as manufacturing upgrades, the energy transition, or domestic consumption. Sectoral and thematic mutual funds can offer that focused exposure within the mutual fund structure, with professional research, daily NAV disclosure, and portfolio transparency.