Sunday, 22 June 2025

Why Students Should Think Twice Before Overusing AI Tools in College

In recent years, I’ve noticed a growing trend: many students and fresh graduates are heavily relying on AI tools during their college years. While I’m a strong believer in the power of large language models (LLMs) — for code generation, documentation, testing, deployment, infrastructure support, and more — I want to explain why you should not become overly dependent on them during your learning journey.

1. College Is for Learning, Not Just Finishing Tasks

Most college assignments and projects have been done countless times before. So why do professors still ask you to do them?

Because these exercises are not about the final output — they’re about the thinking process. They’re designed to help you build a deep understanding of computer science fundamentals. When you shortcut that process by asking an AI to do the thinking for you, you miss the real purpose: learning how to solve problems yourself.

There are public repositories where you can copy solutions and make your projects run instantly. But that’s not the point — your goal in college is not to finish, it’s to understand.

2. If AI Can Do Your Job, Why Would a Company Hire You?

If your only skill is knowing how to prompt AI tools, you’re making yourself easy to replace.

I’ve seen many people ace online assessments — solving problems involving dynamic programming, binary search, graph theory, and more — only to struggle with the basics during on-site interviews. They couldn’t analyze the complexity of a simple nested loop or explain how to choose between two sorting algorithms.

Overusing AI creates a false sense of competence. If you constantly rely on it to get things done, what happens when you face a challenge in real life — one that requires your own reasoning?

3. LLMs Aren’t Always Reliable for Complex or In-Depth Work

Despite all the hype, AI tools are not always accurate.

LLMs can give different answers to the same question depending on how it’s phrased. They sometimes produce code with compile errors or hallucinate incorrect explanations. Unless you understand the underlying concept, you won’t be able to judge whether the AI’s response is correct — and that’s risky.

AI should assist your thinking, not replace it.

4. Don’t Treat Private Code Like It’s Public

A major concern when using public AI tools is data leakage. Once you paste your code, tasks, or documentation into an online AI model, you have no real control over where that information ends up. Future users asking similar questions might get your proprietary logic as part of their output.

I saw this firsthand with an intern we were onboarding. After being assigned a task (with no pressure or deadline), he immediately started pasting a large portion of our internal code and task descriptions into GPT. He took the AI’s response, submitted it as a pull request — and didn’t even test it.

When I asked him about a specific line in the code, he had no idea what it did. I told him clearly: do not upload internal code, models, documents — anything — to GPT. If you need help or more time, just ask. You’re here to learn, not to impress us with how fast you can finish something.

Unfortunately, he kept doing the same thing. Eventually, our manager had to send out a formal email reminding everyone not to share internal content with public AI tools. Whether it was because of this intern or others, the message was clear: this isn’t acceptable. Yet he still relied on GPT for everything, and we all agreed — he had become someone who couldn’t write a line of code without help.


Final Thoughts

AI is a powerful tool — no doubt. But if you rely on it too early and too heavily, especially during your formative learning years, you’re sabotaging your own growth. Use it to assist you, not to bypass the learning process. Learn the foundations first. Think independently. Struggle, fail, and get better.

You’ll thank yourself later — when you're the one solving real problems, not just prompting AI to do it for you.

For example: this post was mainly written by me. I used AI to review it, then I reviewed the AI’s suggestions and made further improvements. That’s how you should be using these tools — not as a crutch, but as a sounding board to help you grow.

Sunday, 1 June 2025

Why Alarms Feel Broken (and How to Fix Them)

I love talking about common myths in software engineering, and here’s the first one: alarms.

The purpose of alarms is simple — visibility without manual checks. Instead of fetching data, the system pushes alerts when something's wrong. It sounds great, right? So why do alarms often feel like a nightmare?

Let’s break it down.

The Manager's View vs The On-Call Engineer's Reality

From a management perspective, more alarms = more safety. They want visibility over every metric to avoid any incident slipping through the cracks. If two metrics signal the same issue, they often prefer two separate alarms — just to be extra safe.

But from the on-call engineer’s perspective, this turns into chaos. Alarms with no clear action, duplicated alerts for the same issue, and false positives just create noise. Nobody wants to be woken up at 3 AM for something that doesn’t need immediate attention.

The core problem? Neither side feels the pain of the other.

  • Higher-level managers may not have been on-call in 10–20 years — or ever. A dozen P0 alerts a day? Not their problem.

  • Junior engineers on-call may not grasp the full system overview. If it doesn't trigger an alarm, they assume it's fine — which isn’t always true.

So, How Do We Fix It?

Balancing these two viewpoints is the responsibility of senior engineers and mid-level managers. They’re the bridge between hands-on pain and high-level priorities.

Let’s be real: execs won’t care about reducing alarm noise unless it affects a KPI. So change has to start lower down.

Tips to Improve Your Alarm System

  1. Define Clear Priority Levels

    If everything is a P0, your system isn't production-ready. Aim for at least three levels:

    • Level 0 (P0): Needs immediate action (e.g., business-critical outage).

    • Level 1 (P1): Important but can wait a few hours.

    • Level 2 (P2): Can wait days without impact.

    Within each level, use FIFO. If someone asks you to drop a P0 to work on a "more important" P0, your priorities are misaligned.

  2. Align Alarms with Business Impact

    A true P0 should reflect measurable business loss — like a bug letting users use services for free.

    A crash affecting 10 users out of 30 million? That’s a P2. It’s annoying, sure, but it’s not urgent.

  3. Set Realistic Expectations for Each Priority Level

    Use volume thresholds per environment:

    • Prod: Max 1 P0/week, 1 P1/day. The rest should be P2+.

    • This helps you track the system’s health over time.

  4. Treat Long Fixes as Tasks, Not Alerts

    If a "bug fix" takes the entire on-call week, it's not a bug — it's a feature request or tech debt task. Don’t let it sit in your incident queue.
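To make tip #1 concrete, here's a minimal sketch, in Python, of what priority-aware alarm routing could look like. Everything in it is illustrative: the `Priority`/`Alarm`/`route` names and the runbook URL are invented for this post, not taken from any real alerting tool.

```python
from dataclasses import dataclass
from enum import Enum


class Priority(Enum):
    P0 = 0  # needs immediate action (business-critical outage)
    P1 = 1  # important, but can wait a few hours
    P2 = 2  # can wait days without impact


@dataclass
class Alarm:
    name: str
    priority: Priority
    runbook_url: str  # every alarm should point at a concrete action


def route(alarm: Alarm) -> str:
    """Map a firing alarm to a response that matches its priority."""
    if alarm.priority is Priority.P0:
        return f"page on-call now: {alarm.name} ({alarm.runbook_url})"
    if alarm.priority is Priority.P1:
        return f"queue for business hours: {alarm.name}"
    return f"file a ticket, review weekly: {alarm.name}"


print(route(Alarm("checkout-error-rate", Priority.P0, "https://wiki.example/checkout")))
```

The point of the sketch is the shape, not the code: if an alarm can't name a priority and a runbook, it probably shouldn't page anyone.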

The goal is to build a system where alarms are actionable, meaningful, and matched to business priorities — not just noise that trains people to ignore real problems.

Let's stop treating alerts as a checklist and start treating them as a tool for clarity and control.

Thursday, 2 January 2025

The Power of MVP

Every groundbreaking app begins with a question: How can we turn this idea into a reality that delivers maximum impact while minimizing resources and effort? Enter the MVP—a powerful strategy that not only tests your concept in the real world but also paves the way for rapid growth and innovation. But what makes an MVP so transformative? Let’s find out.

What is MVP?

MVP stands for "minimum viable product." It's a common term in the software development industry. An MVP is a product with just enough features to let users start using it; everything non-essential is deferred for later, provided user satisfaction remains intact.

Before diving deeper into the definition, let's understand the problem that MVP aims to solve.

Challenges MVP Seeks to Address

There are countless ideas for new products in the software development world. However, before investing resources, every idea must pass through two critical validations:

  1. Technology Validation: Can this idea be implemented? This means determining whether a solution can be delivered within reasonable time and cost, and identifying potential limitations and operational expenses.

  2. Market Validation: Is there a demand for this product? This means analyzing whether customers will pay for the product, even if they need it. The rule here is: "A customer’s need for a product doesn’t always translate into a willingness to pay for it." Some users might not engage unless it fits their price expectations or operational benefits. Companies must evaluate the business model carefully.

Neither of these questions has quick answers. They require research, prototypes, and sample data for clarity. However, the challenge arises when a project incurs high costs due to months of work by engineers, server expenses, project management, design, marketing, and more—only to discover that the product isn’t valuable.

While technology validation can often be addressed through research, market validation usually requires real-world data and feedback. Most products, especially in their early stages, struggle with obtaining direct user feedback or indirect feedback through market responses. This is where the MVP approach proves invaluable by allowing teams to test ideas with minimal investment and gather essential feedback early.

MVP as a Solution

MVP focuses on reducing development and operational costs, enabling teams to validate a product’s value before committing extensive resources. That said, reducing costs excessively can shift the focus from product issues to resource issues.

One reason the MVP concept is so popular is its compatibility with agile methodologies. Agile emphasizes delivering incremental improvements in short cycles and responding to feedback quickly. As a result, MVPs are easier to integrate into workflows when teams are familiar with agile principles.

Take Uber, for example: It started with basic features and gradually evolved into a platform offering diverse ride options across many countries.

How to Use MVP?

The most critical and challenging part of an MVP is minimizing costs while keeping the product viable. There are many differing opinions on which features to prioritize and which to push for later. For example, a user-friendly UI often sparks debate. Some argue that a functional UI is sufficient and doesn’t impact the main experience. Others contend that poor UI design, particularly for new users, can significantly reduce retention rates.

Perhaps the right question here is: What are the critical factors that will make users abandon the app?

  • Define the core features of the product to make it easier to eliminate "nice-to-have" features.
  • Consider the type of target audience. For instance, certain user segments prioritize security, UI, or privacy. If they are your target users, then features addressing their priorities should take precedence.

MVP vs Demo

While they may seem similar, there is a significant difference between an MVP and a demo. A demo is an incomplete project used only to present an overview of the final product. It is normal for a demo to have bugs and optimization issues.

An MVP, on the other hand, is a complete product with very limited functionality (ideally focused on a single feature) but capable of improvement. In practice, confusing these terms can lead to unrealistic expectations, such as assuming an MVP will demonstrate all features, even if they are not functional yet.

Can MVP Be Bad?

One downside to MVPs is that they often prioritize quick results over everything else, which can lead teams to adopt bad practices. Examples include poor architecture, bad code, lack of tests, secrets stored in code or text files, databases stored in text files, and a reliance on manual deployment processes.

Many justify these choices by saying, "We know it’s wrong, but we’ll fix it later." However, this mindset can create two significant issues:

  1. Bad Practices Become Ingrained: These shortcuts often become part of the product’s core. New features may also inherit these bad practices.
  2. The Pain of Migration: Most of the time, migration never happens. Once the product “works,” no one will care about the pain caused by fixing bugs in messy, complex code—except the engineers tasked with doing so.

When Should You Avoid MVP?

Certain projects may not benefit from an MVP approach. For example, in industries where users expect a full experience from the start, launching an underdeveloped product can harm your brand.

One notable example is a dessert company in Egypt that launched an app. Despite heavy marketing investment, the app suffered from significant usability issues, including long login times, sluggish navigation, and inaccurate product availability.

Conclusion

MVP is a powerful approach when combined with an agile environment, as long as teams stick to its principles. Teams must avoid overbuilding for future needs while ensuring that the MVP approach doesn’t justify poor development practices. By focusing on quality and core functionality, companies can leverage MVP effectively without compromising their long-term success.

Saturday, 5 October 2024

Choosing Between Relational, Document, and Graph Models

Choosing the right database is one of the most critical decisions in system architecture. Whether you're dealing with structured or unstructured data, normalized or denormalized data, the choice you make will affect your system's scalability, performance, and maintainability.

This article aims to guide you through the differences between relational, document, and graph databases—highlighting when each type is most suitable, the challenges of using them together in a single system, and key factors to consider for making the best decision for your use case. We'll also explore whether it's feasible for a system to incorporate multiple database types and discuss potential pitfalls of such an approach. By the end, you'll be better equipped to select a database strategy that aligns with your business needs and technical requirements.

Choosing Between Relational and Document Databases

When choosing the right database for your system, it's important to first understand your business needs and use cases. Know the access patterns, the type of data you're storing, and how the business plans to utilize that data.

A common but overly simplistic guideline is: if you have structured data, use a relational database; if it's unstructured, use a document database. However, this approach is misleading. In reality, unstructured data can be stored in a relational database, and structured data can also be efficiently stored in a document database. The choice is less about structure and more about how the data is used and how relationships between data are managed.

Here are some key questions to help guide your decision:

  • Do you need to query on multiple fields frequently?
  • Do you often need to access full records in a single query?
  • What kinds of relationships exist between different records or tables?
  • Does your business require frequent locks and transactions?
  • Does your data have a natural grouping, or does it vary significantly from record to record?
  • How complex are the relationships between your data points?

From my experience, there's a general rule: don't use a relational database without a strong reason. Relational databases provide a lot of power, including support for locks, transactions, relationships, and constraints in a native way. While some document databases offer these features, they often come with trade-offs, like added complexity or performance penalties.

On the other hand, choosing a document database without fully understanding your access patterns could lead to challenges like:

  • Frequent Full Table Scans: Without appropriate understanding of query patterns, you may end up scanning entire collections frequently, increasing costs.
  • Data Consistency Issues: Ensuring data consistency, like unique constraints across collections, can be complex in a document database.
  • Data Duplication: To support access patterns, you might end up duplicating data across collections, leading to the headache of keeping that data in sync.
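To illustrate the trade-off, here's a toy sketch in Python using SQLite and plain JSON as stand-ins for the two models. It is not how a production document store works; it just shows the difference between joining normalized tables and reading one self-contained record.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Relational model: the order is normalized across two tables
# and reassembled with a join at query time.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
cur.execute("CREATE TABLE order_items (order_id INTEGER, sku TEXT, qty INTEGER)")
cur.execute("INSERT INTO orders VALUES (1, 'alice')")
cur.executemany("INSERT INTO order_items VALUES (?, ?, ?)",
                [(1, "book", 2), (1, "pen", 5)])
rows = cur.execute(
    "SELECT o.customer, i.sku, i.qty FROM orders o "
    "JOIN order_items i ON i.order_id = o.id"
).fetchall()
print(rows)  # [('alice', 'book', 2), ('alice', 'pen', 5)]

# Document model: the whole order is one record, fetched in a single
# lookup with no join; the cost is that any duplicated data
# must be kept in sync by the application.
order_doc = json.dumps({
    "id": 1,
    "customer": "alice",
    "items": [{"sku": "book", "qty": 2}, {"sku": "pen", "qty": 5}],
})
print(json.loads(order_doc)["items"])
```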

Understanding Graph Databases

Graph databases can be thought of as a specialized type of document database, but with a focus on modeling relationships. They were created to solve performance issues related to complex relationships in relational databases by storing data as a network of entities and relationships. This type of structure allows graph databases to efficiently handle use cases with a lot of interconnected data.

A graph database uses graph theory to model and perform operations on data relationships, making it an excellent choice for scenarios where relationships are central to the data model. Some natural use cases include:

  • Social Networks: Representing people and the relationships between them.
  • Fraud Detection: Identifying suspicious patterns based on connected entities.
  • Network Management: Modeling and analyzing computer networks.

While I haven’t used graph databases in practice—my knowledge is mostly theoretical—it's clear that they can significantly improve performance when dealing with complex and numerous relationships.
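Since I can't share production experience, here's the theory in miniature: a Python sketch of a tiny social graph as an adjacency map, with a breadth-first search for the number of hops between two people. The names are invented; the point is that a graph database answers this kind of traversal natively, instead of through chains of self-joins.

```python
from collections import deque

# A tiny social graph: each person maps to the people they know.
graph = {
    "amy": ["bob", "carl"],
    "bob": ["amy", "dana"],
    "carl": ["amy"],
    "dana": ["bob", "erin"],
    "erin": ["dana"],
}


def degrees_of_separation(start: str, target: str) -> int:
    """Breadth-first search: how many hops connect two people?"""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        person, hops = queue.popleft()
        if person == target:
            return hops
        for friend in graph[person]:
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return -1  # not connected


print(degrees_of_separation("amy", "erin"))  # 3
```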

Can a System Use Multiple Types of Databases?

Using two different types of databases in the same system comes with several challenges.

In a microservices architecture, it is sometimes argued that if a service requires multiple databases, it could be split into two separate services, each with its own database. This kind of approach aligns with the single responsibility principle and allows each service to scale independently, using the best database for its specific needs.

However, in a monolithic system, using multiple databases can introduce complications:

  • It gives developers too much flexibility, pushing design decisions into implementation. This means developers will have to constantly make choices like "Which database should I use for this case?"—a decision that should ideally be made during design, not development.
  • It reduces isolation between the business layer and the database layer, since the business logic becomes aware of specific implementation details across multiple databases.

While I've seen systems that use multiple databases simultaneously, I've also seen ways to avoid this approach. There may be use cases where this is justifiable, although I haven't encountered or thought of them all. One potential reason for using multiple databases is cost reduction—specifically, when there is a need to lower operational costs, but the resources required to migrate to a better-architected system are not available. In such cases, maintaining an old database while integrating a new one may seem like a practical, albeit temporary, solution.

Final Advice

The decision of which database to use is not one to take lightly. It requires a deep understanding of your application's needs, the nature of your data, and how you intend to scale. Relational, document, and graph databases each have their strengths and limitations, and selecting the right one can significantly impact your system's performance and maintainability.

Migrating from one database model to another can be a time-consuming and challenging process, especially when large volumes of data are involved. It’s best to thoroughly evaluate your needs and validate your decision before committing to a database model.

Conclusion

Choosing the right database is not a one-size-fits-all decision. Each type of database has its unique strengths, and understanding your business requirements and technical constraints is key to making the right choice. It’s also crucial to understand the challenges of using multiple database types within a single system, as doing so can add unnecessary complexity and impact maintainability.

In the next articles, we'll dive deeper into some common challenges: Why does a current database have poor performance, and how can we fix it? We'll also explore the differences between popular database management tools—comparing MySQL to MS SQL, and DynamoDB to Cosmos DB.

Friday, 5 July 2024

Insights for Starting Your Career

What Fresh Software Developers Should Know Early in Their Careers

In this article, I share my perspective on what fresh software developers should understand in their early career. This viewpoint comes from my experience across several companies, and I acknowledge that not all companies have the same expectations.

Most companies outline the needed skill set in their job descriptions. So, if you receive an offer, there's no need to worry about being unqualified. Many aspects of being a good software developer, like teamwork, are difficult to evaluate in an interview. Companies also understand that you'll learn and grow over time.

Before diving into the key topics, it's important to emphasize that I mean these as practices, not just theoretical concepts. No one will ask you to define "teamwork" or a "design pattern" in your job, but they'll expect you to act based on that knowledge. The challenging part is applying these ideas when it counts—for instance, remembering to use the Factory pattern where it fits, rather than just knowing about it.

Object-Oriented Programming (OOP)

Many systems today are built using Object-Oriented Programming, and there will be even more in the future. Being familiar with OOP concepts and applying them effectively is crucial for your growth.

A helpful exercise for understanding OOP better is to draft a low-level design for a system, add a simple high-level implementation, and then evaluate: Which features make this design flawed or require refactoring? Is there duplicate code? Is the code easy for others to understand and use? Would the current design allow for 100% test coverage?

Design Patterns

For fresh developers, design patterns are perhaps the most important topic. Common patterns like Singleton or Factory are used in nearly every project, while others are less frequent. Instead of memorizing them or writing practice code just to apply them, revisit your existing projects and see how a pattern could improve things.

The way to approach design patterns is:

  1. Identify what's wrong with the current state or could go wrong in the future.
  2. Understand which design pattern can help.
  3. Figure out how to apply it.
  4. Assess if the problem is solved.

As you gain experience, you'll start to identify these patterns naturally. You shouldn't memorize them, nor should you try to reinvent them. The widely-used names and conventions are important so others can quickly understand your code.
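As an example of steps 2 and 3, here's a minimal Factory sketch in Python. The scenario (a notifier chosen by channel name) is invented for illustration; the pattern itself is the standard one, and its conventional name is what lets a reviewer recognize it instantly.

```python
from abc import ABC, abstractmethod


class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None: ...


class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"email: {message}")


class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"sms: {message}")


def notifier_factory(channel: str) -> Notifier:
    """Factory: callers ask for a channel by name and never touch the
    concrete classes, so adding a channel changes exactly one place."""
    notifiers = {"email": EmailNotifier, "sms": SmsNotifier}
    return notifiers[channel]()


notifier_factory("email").send("build passed")
```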

SOLID Principles

SOLID principles are commonly used in most large projects. Not adhering to them can lead to cumbersome code, which makes adding features, fixing bugs, or conducting tests much more challenging.

Learning to identify code changes that violate these principles is crucial early on. Otherwise, your code reviews (CR) may frequently get rejected, or, worse, pass unnoticed and cause problems down the line.
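Here's the kind of violation a reviewer might flag, sketched in Python with an invented `Report` example: a class that both models a report and persists it has two reasons to change, breaking the Single Responsibility Principle.

```python
# Before: the report also knows how to persist itself,
# so formatting changes and storage changes both land here.
class Report:
    def __init__(self, body: str) -> None:
        self.body = body

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            f.write(self.body)


# After: one class per responsibility; each has a single reason to change.
class ReportDocument:
    def __init__(self, body: str) -> None:
        self.body = body


class ReportWriter:
    def save(self, report: ReportDocument, path: str) -> None:
        with open(path, "w") as f:
            f.write(report.body)
```

Spotting the "before" shape in a code review is exactly the skill worth building early.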

Dealing with Legacy Code

Most of the time, you'll work on extending and maintaining existing code rather than building new systems from scratch. Fresh developers often can’t start a project alone because they lack experience in system architecture and design.

You’ll encounter legacy code with multiple issues while adding new features. One of the biggest mistakes is to think that fixing old code is straightforward. It's a complex process, so it’s crucial to pick your battles wisely; remember that rewriting from scratch often looks like the simplest answer, but it rarely is.

Teamwork

One common pitfall is focusing solely on solo skills, often a habit developed during college by working alone or letting others do the work.

As a software developer, you'll work in a team. Teamwork means:

  • Writing code that anyone on the team can understand without asking for explanations.
  • Avoiding unnecessary changes to others' code.
  • Treating code reviews as collaborative discussions rather than simple approvals. Always ask if something isn't clear and propose alternatives where you think they're better. Reviewing other people's code helps you learn more than just reading your own work.

System Patterns

System design patterns are not expected knowledge for fresh developers, but you will be exposed to them immediately. No one will ask you to choose the right architecture as a beginner. Instead, you're expected to follow the existing patterns, and with time, contribute new projects following those patterns.

A good way to learn is to read about the patterns in your project to understand their limitations, then explore alternatives. Common system patterns include MVC, MVP, MVVM, DDD, clean architecture, and layered architecture.

Testing

Testing is a broad area, and no one expects fresh developers to master it (except in testing-focused roles). Basic knowledge of mocks, unit tests, and test coverage is enough.

Learning about testing also provides insight into other concepts like SOLID and design patterns. Sometimes, you'll understand the value of a particular approach only after trying to test different versions of the same code.
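As a small taste of mocks, here's a sketch using Python's built-in `unittest.mock`. The `checkout_total` function and its `price_service` dependency are invented for the example; the idea is that the test controls the dependency instead of calling a real service.

```python
import unittest
from unittest.mock import Mock


def checkout_total(price_service, sku: str, qty: int) -> float:
    """Business logic under test; the price lookup is an external dependency."""
    return price_service.get_price(sku) * qty


class CheckoutTest(unittest.TestCase):
    def test_total_multiplies_price_by_quantity(self):
        price_service = Mock()
        price_service.get_price.return_value = 4.0  # no real service involved
        self.assertEqual(checkout_total(price_service, "pen", 3), 12.0)
        price_service.get_price.assert_called_once_with("pen")


if __name__ == "__main__":
    unittest.main()
```

Trying to write this test also shows why passing the dependency in (rather than hard-coding it) matters: code you can't substitute is code you can't test.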

Documentation

Imagine if every SDK or API you used had no documentation — most of us would rather write our own than use it. Documentation comes in many forms beyond just a README or a PDF, but some form of it must exist to guide users and collaborators.

Working Frameworks and Concepts (Agile, Scrum, Waterfall)

You'll encounter different delivery methods from day one—Agile, Scrum, Waterfall, etc. Although these sound complicated when you first read about them, they're simple to apply in practice. Understand the general concepts and focus on learning how your team applies them, as there are often different variations.

Conclusion

These concepts are not universally applicable in every case, but I believe they are essential for developers working in companies that value high code quality. Mastering these practices will set you on the path to becoming an effective and adaptable software developer.

Friday, 12 January 2024

Let the Docs Do the Talking

Years ago, when I was a student, I really hated anything related to documentation; at the time I had a poor impression of it. In college, a team working on the same project was usually around four people, and they were friends who met and talked all the time, and the projects themselves were not very complex compared to real-world projects. So documentation back then was just paper that nobody would read. Even if someone wanted to know something, they would ask the friend sitting in front of them instead of opening a poorly written document and searching for an answer. Maybe we were too lazy to write good documents; most of what students delivered was a template they just filled in after coding.

Anyway, while I admit the college documentation process had room for improvement, what I really dislike is people carrying that attitude into professional work.

There are many things I consider documentation: design documents, meeting agendas and action items, COEs, deployment notes, and so on. In fact, anything that needs to be well written for historical or communication purposes can be considered documentation, which means even emails count.

Why Is Documentation More Important in Real-World Work?

Time Efficiency

When it comes to communication, it's much easier to share a technical document than to book meetings with everyone. Maybe meetings work when your team has around five members, but as the team grows they consume a lot of time: some people will be on vacation, some won't have clear context, and some will be busy with other work. Instead of forcing people to talk to everyone, you can send a well-written document that includes all the context readers may need. People who already have the context can skim those parts, while others can take their time to understand them.

Compared to ordinary chat, a document is usually better organized and uses clearer language, and it avoids many of the calls and pings that otherwise pile up around an issue.

Feedback Handling and Timeline Visibility

What I like most is how easy it is to give feedback on a single point, highlight it for discussion, and then publish the action item for that point; this happens much more smoothly in a document than over calls. When someone adds comments, you can easily see whether they were addressed in the next version of the document.

Easy to Reference

Anyone can reference or search the document in the future. This helps a lot with timeline estimation, requirement clarification and tracking, onboarding new people, and so on.

I can't count how many times I've needed to check a document to remember what exactly the requirements were. Believe me, you will need this a lot when you are handling many threads at the same time, or even when you pick up a new task.

When Should I Write a Document?

A document is something people will need in the future, even if only for archival purposes. I haven't seen a case where people shouldn't write a document; the right question for me is: how much time should people spend writing it?

There is one rule I follow, and I believe it makes a lot of sense: the more important a document is, the more time you should spend on it. For example, an architecture or business document that will affect many people for months or even years can take days or even weeks. An investigation document for a minor incident with no real loss should take a few hours, a recap of a call with an external party should take a few minutes, and so on.

What Should a Good Document Look Like?

Here are my thoughts on what makes a good document. Some points apply to every document, while others apply only to long ones:

  • A good title isn't enough: it's nice to have a descriptive title, but no matter how good the title is, the document must have an introduction section defining its purpose. One of the most annoying things for any reader is going through the whole document just to discover its goal.
  • Assume readers have no context: never assume readers have any unmentioned context. The current situation and the motivation for the document are not things readers will know. People with context can skip this section, but people without it cannot imagine it; for example, a writer may mention a number assuming the reader knows whether that number is good or bad.
  • Define who will read the document: identifying the audience is very important, because different roles need different levels of abstraction.
  • Separate what is critical from what is less significant: normally a document has core critical sections and others that are less essential or optional. Can the reader easily spot the critical sections? Separation can be done with text formatting, ordering, or even moving extra content into separate files.
  • Support claims with facts: remember this is a technical document. A statement like "X is better than Y" requires justification; the document should state facts supporting the writer's point of view, and anything else should be avoided.
  • Adopt a no-meeting mindset: one of the most common root causes of a bad document is a writer who relies on the discussion meeting, or on readers contacting them if something isn't clear. Things that never make it into the document are regularly discussed in meetings instead; for me, if something isn't mentioned in the document, it shouldn't be brought to the table in the meeting. Write with the mindset that no one should need to contact you about this document; otherwise, it is missing important information.
  • A second pair of eyes is always better: there is a difference between a document that has been reviewed by someone else and one that hasn't. If the document is long enough, ask someone to review it before publishing it to the team or company.
  • Use domain and business standards: it's always better to use widely known vocabulary, templates, and charts that anyone in the field recognizes instead of reinventing the wheel.
  • Learn from other people's mistakes: when everyone says a document is great, it's highly recommended to follow its style. On the other hand, when people say a document is bad, hard to understand, or misleading, try to understand why they think so and avoid those mistakes.

Tuesday, 2 January 2024

The Art of Effective Business Communication

In today's digital age, businesses have access to a multitude of communication channels. But with so many options, reaching and engaging existing users can feel like navigating a crowded marketplace. This article delves into the power of strategic marketing communication for fostering loyalty, driving up engagement, and ultimately boosting your bottom line.

Communication today can mean many things: notifications, emails, SMS, calls, even social media posts, home visits, or meetings. Business communication focuses mainly on messages where the sender is the business and the recipient is one of its users.

There are two main types of business communication:

  • Operational communication: expected messages, like notifying the client about an order's state, a renewal date, and so on.
  • Marketing communication: messages sent for business reasons, for example to notify the user about new items added to the catalog.

While operational communication is straightforward, marketing communication is harder and needs more resources.

Why Marketing Communication?

There are major reasons why businesses need marketing communication, but they all come back to one fact: it is still easier to reach existing users than to find new ones. Many machine learning models collect large amounts of user data to make reaching new users easier, but so far the click and conversion rates of most of these methods are not efficient. So how can we grow a service through the users who already pay for it?

  • Cross-selling: users who pay for one service can pay for another.
  • Increasing total amount or frequency: users can pay at a higher frequency, amount, or tier.
  • Increasing retention rate: re-engaging users who paid and then stopped.
  • Increasing conversion rate: converting users who signed up but never paid for any service.
  • User-based marketing: users refer the application to others.

Communication Plan

Conversion Metrics: How Does the Sender Judge Success?

The most important thing is to answer these questions: How do we decide whether this communication succeeded or failed? How do we measure the target metrics? What side effects might occur?

There are always headline metrics the business wants to improve, like total transaction amount, but many sub-metrics feed into them. Most of the time the sender cannot move all the sub-metrics at once; knowing this, the sender should target a small set of metrics to maximize the result.

A generic target like "payments increased" is not very practical. The sender needs to define a concrete target based on market and historical data, and that target should be hard but achievable. When to measure is also essential: should the business expect results after a couple of hours, or a couple of weeks?

Lastly, the business needs to be aware of side metrics; they help in understanding whether the communication had a good or bad effect overall. For example, suppose communication Z improved the retention rate of users who paid for service X, but as a side effect the retention rate for service Y dropped. Should the business consider this type of communication a success or a failure?

Know the Customers

The key to successful communication is knowing as much as possible about the customer. Can the business form a good picture of the user base's age distribution, gender, financial state, device type, active hours, time zone, and so on? This information later determines many things, including how frequently the business should communicate with different users, when, and in which language and through which channel.

The more information the business has, the easier the communication becomes: when to send, how to word the message, and which channel to use are all questions whose answers depend on who the user is.

A business that lacks this information is pushed into a corner and ends up with a generic approach that will never maximize its metrics.

Communication Bandwidth

Every type of customer has limited bandwidth, and understanding the user base makes communication much more effective. For example, different age groups have different resources, including time, money, and so on.

The No-Cost Communication Myth

Many senders calculate the cost of a communication using only its direct cost. Channels like notifications, emails (and SMS, for a telecom company) look like zero-cost channels they can use as much as they want, without any budget constraints.

While at first sight this looks true, the hidden truth is that the cost of a communication should never be measured by its direct cost alone, but also by its downstream impact. Repeatedly sending irrelevant communications makes future communication more expensive: users who receive too many messages stop paying attention to them, and eventually block the channel entirely (blocking notifications, emails, or SMS, or simply ignoring them), while spam systems filter more of these messages. In the end, a far more expensive channel becomes necessary for a large group of users, where in a healthy relationship it would have been needed for only a small group.

Who should receive this communication?

Determining whom to contact, how, and when is not an easy job at all; you can find companies with large data-analysis teams dedicated to answering exactly this. Even with a lot of data, the sender will never be 100% sure about the best way to reach each user and maximize the impact.

Smaller groups make communication much more effective in most cases. That means building a data lake of user profiles and action history, training models to cluster similar users, identifying these groups and finding what they have in common, defining the best way to communicate with each of them, and so on.

All of this takes a lot of time and effort, but in most cases it is really worth it: it gives users the feeling that the communication came from someone close to them, which makes it much harder to ignore.

A/B Testing

Defining how to validate the numbers is also important. Someone may claim that after sending a notification, X% of users converted, and others may argue about whether X is good or bad. My most significant question here is: X compared to what? If, for example, people convert at a rate Y without the notification, and Y > X, then no matter how high X is, the notification has a negative impact and this type of communication should be eliminated.

A/B testing answers this question. The sender divides the target group into two random groups and sends the communication to only one of them. Why? Because the sender needs to remove external factors as much as possible; sometimes external factors (a holiday, long-term growth, or pricing changes) affect the overall trend far more than the communication does. To make sure the metrics measure only the effect of the communication, the sender needs two similar groups; comparing the metrics across both groups gives the business much more insight.

How do you divide the main group into two similar groups? Simply assign people randomly from the main group to one of the two groups. The sender should never try to split the group based on any shared attribute.

Should the two groups be the same size? Not necessarily. The simplest split is 50/50, but there are other options like 80/20. It mainly depends on how confident the sender is about the experiment, and both groups must be large enough that random error can be ignored.
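Here's a minimal sketch of the split in Python. The user IDs and the conversion data are made up; the part that matters is that assignment is random, the split ratio is explicit, and the same metric is computed for both groups.

```python
import random

users = [f"user-{i}" for i in range(10_000)]

# Assign randomly; never split on a shared attribute like age or region.
random.shuffle(users)
cut = int(len(users) * 0.5)  # 50/50 split; use 0.8 for an 80/20 split
treatment, control = users[:cut], users[cut:]


def conversion_rate(converted: set, group: list) -> float:
    """Share of the group that converted during the experiment window."""
    return sum(u in converted for u in group) / len(group)


# Stand-in for real outcomes collected after the campaign.
converted = set(random.sample(users, 500))
print(f"treatment: {conversion_rate(converted, treatment):.3%}")
print(f"control:   {conversion_rate(converted, control):.3%}")
```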

Should we divide into only two groups? No, the sender can use more groups, but more groups make the experiment more complex. Also, be aware of two things:

  1. A control group is mandatory: the sender must have a baseline to compare against.
  2. No group can be very small; otherwise random error will distort the results and the sender will never be confident in them (the degree of confidence matters).

Should we run the test more than once? It's generally good practice to repeat the experiment, since time and events can affect the results, but make sure to adjust the group sizes based on the most recent results.

One study (cited in the references) found that push notifications sent after A/B testing had a 10% higher click-through rate than those sent without it.
