Rubber Duck Debugging with AI: The Modern Approach
“Rubber duck debugging” is an old technique: you explain your bug to an inanimate object (a rubber duck), and in the process of explaining, you discover the problem yourself. Today’s AI assistants make this technique more powerful. Instead of talking to a duck, you can ask an LLM to help you think through the problem, suggest hypotheses, and generate fixes. You get instant, intelligent feedback that guides you toward solutions faster.
Formatting Bug Reports for LLMs
The quality of debugging help depends on how well you communicate the problem. An LLM can’t guess what’s wrong if you just say “my code is broken.” Instead, structure your bug report with four key pieces of information:
- The stack trace or error message — Show the exact error, including file names and line numbers
- The relevant code — Paste the code that’s failing or that you suspect is the culprit
- What you expected to happen — Describe the intended behavior
- What you’ve already tried — List fixes or debugging steps you’ve attempted
Here’s an example:
“I’m getting a NullPointerException on line 42 of UserRepository.kt. Here’s the stack trace: [paste trace]. The code is [paste function]. I expected it to return a User object, but instead it crashes when trying to access user.id. I’ve already tried adding null checks in the ViewModel, but that doesn’t solve the root cause.”
With this information, an LLM can quickly identify patterns and suggest fixes. It can see if the NPE is caused by an API returning null, a Hilt injection failure, or a logic error in your code.
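The API-returns-null case usually gets fixed at the parsing boundary. Here is a minimal sketch of that idea — `UserDto`, `User`, and the field names are illustrative assumptions, not code from the bug report above:

```kotlin
// Hypothetical DTO and domain types for illustration.
data class UserDto(val id: String?, val name: String?)
data class User(val id: String, val name: String)

// Convert at the API boundary and fail fast with a clear message,
// instead of letting a null propagate and crash later as an NPE
// deep inside the ViewModel or UI layer.
fun UserDto.toDomainUser(): User {
    val safeId = requireNotNull(id) { "API returned a user with no id" }
    return User(id = safeId, name = name ?: "Unknown")
}
```

Failing fast here turns a mysterious crash on `user.id` into an error message that names the real culprit: the API response.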
The “Explain This Error” Pattern
When you get a cryptic error message, ask the LLM to explain it in plain English. This often reveals the real cause hiding behind technical jargon.
“I’m getting this error: ‘IllegalStateException: No matching arguments found in savedInstanceState for argument named user of type User’. What does this mean, and why might it happen?”
The LLM will explain that your Fragment is trying to restore a User object from saved state, but the object was never placed there. That points you toward solutions: save the User in onSaveInstanceState, pass it through arguments, or hold it in a ViewModel so it survives configuration changes.
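The ViewModel route is often the cleanest fix. A minimal sketch, assuming an AndroidX ViewModel with a Parcelable User — the "user" key is an assumption and must match however the argument is supplied:

```kotlin
// Holding the User in a SavedStateHandle-backed ViewModel means it
// survives both configuration changes and process death, so the
// Fragment never has to fish it out of savedInstanceState itself.
class UserViewModel(savedStateHandle: SavedStateHandle) : ViewModel() {
    // Returns null if the argument was never supplied, rather than
    // crashing with an IllegalStateException at restore time.
    val user: User? = savedStateHandle["user"]
}
```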
This pattern is especially useful for Android-specific errors, which can be opaque to developers who don’t memorize framework behavior. Error messages like “Only the original thread that created a view hierarchy can touch its views” or “Job was cancelled” make sense once an AI explains them.
Bisecting Issues with AI
When you’re not sure which part of your code is broken, ask the LLM to help you bisect the problem. Provide a high-level flow of your code and ask the LLM to suggest where the bug is most likely.
“Here’s my data flow: API call → Retrofit response → Repository parses JSON → ViewModel updates LiveData → UI observes and displays. The UI shows the wrong data. Can you help me identify where the bug probably is, and what questions I should ask to narrow it down?”
The LLM will suggest: “Check if Retrofit is parsing the JSON correctly. Print the raw response. Check if the Repository is transforming the data incorrectly. Check if the ViewModel’s LiveData value is what you expect.” You then test these hypotheses systematically, cutting down debugging time from hours to minutes.
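A small helper makes this hypothesis testing fast. The `tap` function below is an illustrative assumption, not a standard API — it logs a value as it flows through each stage of the pipeline, so you can see exactly where the data goes wrong:

```kotlin
// Log a value at a named stage of the pipeline, then pass it through
// unchanged, so taps can be inserted between any two stages.
inline fun <T> T.tap(label: String): T {
    println("$label: $this")
    return this
}

// Usage sketch (function names assumed):
// api.fetchUserJson()
//     .tap("raw response")        // is Retrofit returning what you expect?
//     .let(::parseUser)
//     .tap("parsed user")         // did the Repository transform it correctly?
```

Because `tap` returns its receiver, it can be dropped into the middle of a chain and removed later without changing behavior.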
Real Example: Compose Recomposition Bug
Suppose your Jetpack Compose UI is recomposing too often, causing janky animations. Here’s how to debug with AI:
“I have a Compose screen that animates a list of items. Every time a single item is clicked, the entire list recomposes, causing a visible stutter. Here’s my code: [paste Composable function]. I’ve already tried wrapping the list item in a Box, but it didn’t help. Why is the entire list recomposing when only one item changed?”
The LLM will likely spot that your list state is defined at the wrong scope, or that you’re missing a key parameter, or that your lambda is recreating on every recomposition. It will suggest specific fixes like using remember to memoize values or adding a key parameter to your list items.
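The key-parameter fix typically looks like this sketch — `LazyColumn`, the item ids, and the handler names are illustrative assumptions, not code from the prompt above:

```kotlin
// A stable key lets Compose match items across recompositions, so
// only the item whose data actually changed is recomposed, instead
// of the whole list re-rendering on every click.
LazyColumn {
    items(items = users, key = { it.id }) { user ->
        UserRow(user = user, onClick = { onUserClick(user.id) })
    }
}
```

If the stutter persists, the LLM may also suggest hoisting state out of the list or using `remember` so that values and lambdas aren’t recreated on every recomposition of the parent.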
Real Example: Coroutine Cancellation Bug
Coroutines are powerful but tricky. Here’s a common bug:
“My ViewModel launches a coroutine to fetch data. When the user navigates away, the Fragment is destroyed, but the coroutine keeps running. I see ‘Job was cancelled’ errors in the logs. How do I make sure the coroutine is cancelled when the ViewModel is destroyed?”
The LLM will explain that viewModelScope automatically cancels coroutines when the ViewModel is cleared, but only if you use it. It will show you the difference between viewModelScope.launch (correct) and GlobalScope.launch (incorrect). This immediate clarity saves you from subtle memory leaks.
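The distinction is easiest to see side by side. A sketch, assuming an AndroidX ViewModel and a hypothetical `UserRepository`:

```kotlin
class UserViewModel(
    private val repository: UserRepository  // assumed dependency
) : ViewModel() {
    fun load() {
        // Correct: viewModelScope is cancelled automatically when the
        // ViewModel is cleared, so this coroutine cannot outlive it.
        viewModelScope.launch {
            repository.fetchUser()
        }
        // Incorrect: GlobalScope.launch { ... } is tied to the whole
        // process, so it keeps running (and holds references) after
        // the screen is gone.
    }
}
```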
Real Example: Hilt Injection Failure
Hilt errors can be confusing. Here’s how to get clarity:
“I’m getting ‘[Dagger/MissingBinding] UserRepository cannot be provided without an @Inject constructor or an @Provides-annotated method’. But I’ve annotated my ViewModel with @HiltViewModel and provided all its dependencies. Here’s my code: [paste ViewModel, Repository, and Hilt module]. What’s missing?”
The LLM will scan the code and identify that UserRepository itself has no @Inject constructor and no @Provides or @Binds method, or that one of the Repository’s own dependencies is missing a binding, so Hilt doesn’t know how to construct the object graph. It will suggest the exact fix.
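The usual fix is to give Hilt a way to construct the class. A sketch, with class and dependency names assumed:

```kotlin
// Annotating the constructor tells Hilt how to build UserRepository;
// each constructor parameter must itself be bindable in the graph.
class UserRepository @Inject constructor(
    private val api: UserApi
)

// If UserRepository is an interface, bind an implementation instead
// with a @Binds method inside an @InstallIn module.
```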
Hypothesis Generation: Ranked by Likelihood
For truly mysterious bugs, ask the LLM to generate hypotheses ranked by likelihood. This guides your debugging in order of probability, saving time on unlikely theories.
“My app crashes on startup, but only on Android 10 devices. Here’s the stack trace: [paste trace]. Here’s the code where it crashes: [paste code]. Generate a list of hypotheses ranked by likelihood, and suggest how to test each one.”
The LLM will respond with something like:
- Most likely: Permissions issue on Android 10. You’re trying to access files without WRITE_EXTERNAL_STORAGE. Test by checking logcat for permission denials.
- Likely: You’re using a deprecated API that was removed in Android 10. Test by checking the Android 10 breaking changes documentation.
- Possible: A third-party library is incompatible with Android 10. Test by checking library release notes and issues.
- Unlikely: A race condition in your initialization code. Test by adding explicit synchronization.
This ranking saves you from chasing dead ends. You test the most likely hypotheses first and find the bug quickly.
Advanced: Multi-Step Debugging Prompts
For complex bugs, use a series of focused prompts rather than one giant prompt. Each prompt can build on the previous answer.
Prompt 1: “I have a data binding bug in my Activity. Here’s the error and the relevant code. What are the three most common causes of this error?”
Prompt 2: [after checking the first hypothesis] “I checked the first cause and it’s not that. Here’s what I found: [paste diagnostic info]. Does that change your diagnosis?”
Prompt 3: “Your second hypothesis about variable naming seems right. Here’s the code I’m checking: [paste]. Is this the issue?”
This iterative approach is more effective than dumping all information at once. It also keeps the LLM context focused and reduces hallucinations.
Debugging Coroutine Issues with AI
Coroutines are particularly suited to AI-assisted debugging because their behavior can be subtle. When you’re stuck, provide context about scope, error handling, and timing:
“I’m using viewModelScope to fetch data in a ViewModel. The coroutine fails (network error), and I want to retry. Here’s my code: [paste code]. The retry isn’t working — the coroutine seems to complete even when it fails. What’s wrong?”
The LLM will identify whether you’re missing a try-catch, using the wrong error handling pattern, or not properly observing the result. It can suggest using runCatching for functional error handling or restructuring with retry operators.
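A retry built on runCatching can be sketched in a few lines. This version is deliberately suspend-free for clarity — the real code would live inside a coroutine, and `maxAttempts` is an assumption:

```kotlin
// Retry a block up to maxAttempts times, returning the first success
// or the last failure. Note for real coroutine code: rethrow
// CancellationException inside the loop so cancellation still works,
// because runCatching would otherwise swallow it.
fun <T> retry(maxAttempts: Int = 3, block: () -> T): Result<T> {
    var last: Result<T> = Result.failure(IllegalStateException("no attempts made"))
    repeat(maxAttempts) {
        last = runCatching(block)
        if (last.isSuccess) return last
    }
    return last
}
```

Returning `Result<T>` instead of throwing keeps the error in the type, so the caller must decide explicitly how to observe failure.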
For deeper learning on error handling patterns in Kotlin, read our article on runCatching and Result for Functional Error Handling.
Leveraging Testing to Debug
Sometimes the best way to debug is to write a test. Ask the LLM to help you write a unit test that isolates the bug:
“I suspect my UserRepository is returning null when it shouldn’t. Write a unit test that verifies the correct behavior, using Mockito to mock the API response.”
The test will either confirm the bug (the test fails) or prove you’re not triggering it the way you think. This isolates the problem and often reveals the root cause. Learn more about testing strategies in our post on Testing and Debugging with Claude Code.
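Such an isolating test might look like the sketch below. The prompt above mentions Mockito; a hand-written fake keeps this example dependency-free, and all the names are illustrative assumptions:

```kotlin
// Minimal types so the test is self-contained.
interface UserApi { fun fetchUser(id: String): User? }
data class User(val id: String)

// The class under suspicion: it should never return null silently.
class UserRepository(private val api: UserApi) {
    fun getUser(id: String): User =
        api.fetchUser(id) ?: error("User $id missing from API response")
}

// A fake standing in for the real API (Mockito would play this role).
class FakeUserApi(private val users: Map<String, User>) : UserApi {
    override fun fetchUser(id: String) = users[id]
}
```

With the fake in place, one test asserts the happy path and another asserts the failure path, pinning down exactly which input reproduces the bug.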
Best Practices for AI-Assisted Debugging
Keep these principles in mind:
- Be specific: “My code doesn’t work” is too vague. “I’m getting a NullPointerException on line 42” is actionable
- Show your work: Tell the LLM what you’ve already tried so it doesn’t suggest the same fixes
- Include stack traces: Full stack traces give the LLM much more information than partial error messages
- Verify suggestions: Don’t blindly apply every fix the LLM suggests. Understand why it’s proposing the fix and test it
- Iterate: If the first suggestion doesn’t work, ask follow-up questions rather than giving up
Conclusion
AI-assisted debugging is like rubber duck debugging on steroids. By formatting your bug reports clearly, asking the LLM to explain cryptic errors, bisecting problems systematically, and generating ranked hypotheses, you can find and fix bugs much faster. Use these patterns for Compose issues, coroutine bugs, Hilt injection problems, and anything else that has you stumped. The key is giving the LLM enough context to help you think through the problem methodically.
This post was written by a human with the help of Claude, an AI assistant by Anthropic.
