Lesson 4 of 6
Strengths and limits
What you'll learn
- Name 3 tasks where AI shines today
- Name 3 tasks where AI fails predictably
- Set up a personal trust filter for AI output
AI is not uniformly good or uniformly bad. It is incredibly strong in certain directions and surprisingly weak in others. Knowing the map -- where AI shines and where it stumbles -- is the single most important skill for using it well. This lesson draws that map.
Where AI shines
These are the tasks where current AI models are genuinely excellent. When you use AI for these, you will often be impressed.
Drafting and rewriting text. Need a first draft of an email, a social media post, a product description, or a cover letter? AI is remarkably good at this. It does not suffer from writer's block, and it can produce a solid starting point in seconds. You will almost always need to edit the result to match your voice, but going from a draft to a final version is much faster than starting from a blank page.
Summarizing long content. Hand AI a long article, a research paper, a meeting transcript, or a dense email thread, and ask for a summary. This is one of the highest-value uses of AI today. It can compress ten pages into ten bullet points and usually captures the key ideas accurately.
Brainstorming and generating options. When you are stuck, AI is an excellent thinking partner. "Give me 10 possible names for a pet grooming business" or "What are 5 different angles I could take for this presentation?" AI generates options fast, and even if most of them are mediocre, the few good ones can spark your own creativity.
Translation and language help. AI handles translation between major languages impressively well, especially for informal and business content. It is also great at adjusting tone: making something more formal, more casual, simpler, or more persuasive.
Explaining complex topics simply. "Explain how inflation works like I am 12 years old." This is where AI's pattern-matching ability truly shines. It has seen millions of explanations at every level of complexity and can adapt its language to your needs.
Organizing and structuring information. Give AI a messy list of notes and ask it to organize them into categories, create an outline, or build a table. It is fast and surprisingly good at finding structure in chaos.
Where AI fails
These are the tasks where AI will confidently produce wrong, misleading, or useless output. Knowing these failure modes will save you from costly mistakes.
Precise math and calculations. This might surprise you, given that computers are "good at math," but language models are not calculators. They predict text patterns, and while they can often get simple math right, they regularly fail at multi-step calculations, large numbers, or anything requiring precision. If the answer matters, use a calculator.
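The "use a calculator" advice can be as lightweight as a few lines of Python. The figures below are purely illustrative (they are not from this lesson); the point is that ordinary code does the arithmetic exactly, step by step, instead of predicting what an answer should look like:

```python
# A multi-step calculation of the kind language models often fumble:
# compound interest on $12,345.67 at 4.5% per year for 10 years.
# (Illustrative numbers only.)
principal = 12_345.67
rate = 0.045
years = 10

# Exact arithmetic -- no pattern-matching involved.
final_amount = principal * (1 + rate) ** years
interest_earned = final_amount - principal

print(f"Final amount:    ${final_amount:,.2f}")
print(f"Interest earned: ${interest_earned:,.2f}")
```

A reasonable middle ground: let AI explain *how* to set up a calculation like this, then run the numbers yourself in a spreadsheet or a short script.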
Real-time or very recent information. AI models have a knowledge cutoff -- a date when their training data ends. They do not browse the internet in real time (unless they have a specific tool for that). If you ask about yesterday's news, last week's stock price, or a product launched this month, the model might guess, make something up, or tell you it does not know. Always check time-sensitive facts independently.
Personal opinions and subjective judgments. "Should I take this job?" "Is this person trustworthy?" "Which city should I move to?" AI has no personal experience, no values, and no skin in the game. It can lay out pros and cons, but the decision is yours. Treat AI as a research assistant, not an advisor.
Citing sources accurately. Ask AI to provide references or URLs and you will often get citations that look real but do not exist. The model is generating text that looks like a citation, not looking up an actual source. This is one of the most common traps. Never trust an AI-provided URL or reference without clicking it yourself.
Sensitive, high-stakes decisions. Medical diagnoses, legal advice, financial decisions, safety assessments -- AI can provide general information in these areas, but it should never be your only source. The consequences of an error are too high, and the model has no way to account for your specific situation.
Hallucinations: the biggest trap
The most important failure mode has a name: hallucination. This is when the AI generates information that sounds confident and specific but is simply wrong. A hallucinated answer does not come with a warning label. It reads exactly like a correct answer.
Why does this happen? Remember from Lesson 2 -- the model is a pattern machine. It does not look up facts in a database. It generates text that statistically "fits" the pattern of your question. Sometimes the most plausible-sounding text is factually wrong. The model does not know the difference because it does not "know" anything in the way you do. It produces patterns.
Hallucinations are most dangerous when you are asking about a topic you do not know well. If someone gives you wrong information about your hometown, you catch it instantly. If they give you wrong information about quantum physics and you have never studied it, it sounds perfectly reasonable.
Your personal trust filter
Here is a simple framework for deciding when to trust AI output and when to verify:
Trust freely: Creative content (brainstorming, drafts, rewrites), formatting and structuring, explanations of concepts you will review, and any output you plan to edit before using.
Verify before using: Any specific facts, dates, numbers, or statistics. Any names of real people, organizations, or places. Any URLs, references, or citations. Any claims about recent events.
Do not rely on AI alone: Medical, legal, or financial decisions. Safety-critical assessments. Anything where being wrong has serious consequences.
This filter is simple, but if you internalize it, you will avoid the vast majority of AI mistakes. The rule of thumb: AI is a brilliant first drafter and a terrible fact-checker. Use it accordingly.
What is next
Now that you know what AI can and cannot do, it is time to put everything together. In the next lesson, you will complete your first real task with AI -- end to end, step by step. Head to Your first useful task when you are ready.
Try it yourself
Ask your AI assistant a factual question about a topic you know very well. Then ask one about a topic you know nothing about. Compare how confident the AI sounds in both cases. Did it get anything wrong in the topic you know? What does that tell you about the topic you do not know?
Reflect
Think of a decision you made recently based on information you found online. Would you have trusted an AI's answer for that decision? Why or why not?