Sunday, 22 June 2025

Why Students Should Think Twice Before Overusing AI Tools in College

In recent years, I’ve noticed a growing trend: many students and fresh graduates are heavily relying on AI tools during their college years. While I’m a strong believer in the power of large language models (LLMs) — for code generation, documentation, testing, deployment, infrastructure support, and more — I want to explain why you should not become overly dependent on them during your learning journey.

1. College Is for Learning, Not Just Finishing Tasks

Most college assignments and projects have been done countless times before. So why do professors still ask you to do them?

Because these exercises are not about the final output — they’re about the thinking process. They’re designed to help you build a deep understanding of computer science fundamentals. When you shortcut that process by asking an AI to do the thinking for you, you miss the real purpose: learning how to solve problems yourself.

There are public repositories where you can copy solutions and make your projects run instantly. But that’s not the point — your goal in college is not to finish, it’s to understand.

2. If AI Can Do Your Job, Why Would a Company Hire You?

If your only skill is knowing how to prompt AI tools, you’re making yourself easy to replace.

I’ve seen many people ace online assessments — solving problems involving dynamic programming, binary search, graph theory, and more — only to struggle with the basics during on-site interviews. They couldn’t analyze the complexity of a simple nested loop or explain how to choose between two sorting algorithms.
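To make that concrete, here is the kind of basic question I mean. The function below is a made-up illustration, not something from a real interview; the point is that you should be able to read its cost straight off the structure of the loops.

    # Hypothetical example: count how many pairs of equal values a list contains.
    def count_equal_pairs(items):
        count = 0
        for i in range(len(items)):               # outer loop: n iterations
            for j in range(i + 1, len(items)):    # inner loop: up to n - 1 iterations each time
                if items[i] == items[j]:
                    count += 1
        return count
    # Two nested passes over the input: O(n^2) time, O(1) extra space.

If someone can pass a dynamic programming puzzle with AI assistance but can't explain why this runs in quadratic time, the assessment score tells you very little.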

Overusing AI creates a false sense of competence. If you constantly rely on it to get things done, what happens when you face a challenge in real life — one that requires your own reasoning?

3. LLMs Aren’t Always Reliable for Complex or In-Depth Work

Despite all the hype, AI tools are not always accurate.

LLMs can give different answers to the same question depending on how it’s phrased. They sometimes produce code with compile errors or hallucinate incorrect explanations. Unless you understand the underlying concept, you won’t be able to judge whether the AI’s response is correct — and that’s risky.

AI should assist your thinking, not replace it.

4. Don’t Treat Private Code Like It’s Public

A major concern when using public AI tools is data leakage. Once you paste your code, tasks, or documentation into an online AI model, you have no real control over where that information ends up. If those conversations are later used for training, future users asking similar questions might even get your proprietary logic as part of their output.

I saw this firsthand with an intern we were onboarding. After being assigned a task (with no pressure or deadline), he immediately started pasting a large portion of our internal code and task descriptions into GPT. He took the AI’s response, submitted it as a pull request — and didn’t even test it.

When I asked him about a specific line in the code, he had no idea what it did. I told him clearly: do not upload internal code, models, documents — anything — to GPT. If you need help or more time, just ask. You’re here to learn, not to impress us with how fast you can finish something.

Unfortunately, he kept doing the same thing. Eventually, our manager had to send out a formal email reminding everyone not to share internal content with public AI tools. Whether it was because of this intern or others, the message was clear: this isn’t acceptable. Yet he still relied on GPT for everything, and we all agreed — he had become someone who couldn’t write a line of code without help.


Final Thoughts

AI is a powerful tool — no doubt. But if you rely on it too early and too heavily, especially during your formative learning years, you’re sabotaging your own growth. Use it to assist you, not to bypass the learning process. Learn the foundations first. Think independently. Struggle, fail, and get better.

You’ll thank yourself later — when you're the one solving real problems, not just prompting AI to do it for you.

For example: this post was mainly written by me. I used AI to review it, then I reviewed the AI’s suggestions and made further improvements. That’s how you should be using these tools — not as a crutch, but as a sounding board to help you grow.

Sunday, 1 June 2025

Why Alarms Feel Broken (and How to Fix Them)

I love talking about common myths in software engineering, and here’s the first one: alarms.

The purpose of alarms is simple — visibility without manual checks. Instead of engineers polling dashboards and pulling data by hand, the system pushes an alert when something's wrong. It sounds great, right? So why do alarms often feel like a nightmare?

Let’s break it down.

The Manager's View vs The On-Call Engineer's Reality

From a management perspective, more alarms = more safety. They want visibility over every metric to avoid any incident slipping through the cracks. If two metrics signal the same issue, they often prefer two separate alarms — just to be extra safe.

But from the on-call engineer’s perspective, this turns into chaos. Alarms with no clear action, duplicated alerts for the same issue, and false positives just create noise. Nobody wants to be woken up at 3 AM for something that doesn’t need immediate attention.

The core problem? Neither side feels the pain of the other.

  • Higher-level managers may not have been on-call in 10–20 years — or ever. A dozen P0 alerts a day? Not their problem.

  • Junior engineers on-call may not have the full picture of the system. If it doesn't trigger an alarm, they assume it's fine — which isn’t always true.

So, How Do We Fix It?

Balancing these two viewpoints is the responsibility of senior engineers and mid-level managers. They’re the bridge between hands-on pain and high-level priorities.

Let’s be real: execs won’t care about reducing alarm noise unless it affects a KPI. So change has to start lower down.

Tips to Improve Your Alarm System

  1. Define Clear Priority Levels

    If everything is a P0, your system isn't production-ready. Aim for at least three levels:

    • Level 0 (P0): Needs immediate action (e.g., business-critical outage).

    • Level 1 (P1): Important but can wait a few hours.

    • Level 2 (P2): Can wait days without impact.

    Within each level, use FIFO. If someone asks you to drop a P0 to work on a "more important" P0, your priorities are misaligned. (A small code sketch of these levels, together with the volume budgets from tip 3, follows this list.)

  2. Align Alarms with Business Impact

    A true P0 should reflect measurable business loss — like a bug that lets users access a paid service for free.

    A crash affecting 10 users out of 30 million? That’s a P2. It’s annoying, sure, but it’s not urgent.

  3. Set Realistic Expectations for Each Priority Level

    Use volume thresholds per environment:

    • Prod: Max 1 P0/week, 1 P1/day. The rest should be P2+.

    • This helps you track the system’s health over time.

  4. Treat Long Fixes as Tasks, Not Alerts

    If a "bug fix" takes the entire on-call week, it's not a bug — it's a feature request or tech debt task. Don’t let it sit in your incident queue.

The goal is to build a system where alarms are actionable, meaningful, and matched to business priorities — not just noise that trains people to ignore real problems.

Let's stop treating alerts as a checklist and start treating them as a tool for clarity and control.
