LLM Prompts for Code Refactoring: A Practical Guide

LLM Prompts for Code Refactoring: Structuring Your Requests

When you’re facing a large refactoring task in your Android codebase, asking an LLM the right way can save you hours. Unlike simple coding questions, refactoring requires your AI assistant to understand the architectural context, the business logic you’re preserving, and the specific constraints of your project. In this post, you’ll learn proven patterns for structuring LLM prompts that yield high-quality refactored code you can trust.

The Before/After Pattern

The most effective way to guide an LLM through refactoring is the before/after pattern. Rather than asking “refactor this code to be better,” you show the LLM what you want to change and why. Start by presenting the current implementation, then clearly state your goal. For example: “Here’s my ViewModel using callbacks for network requests. I want to migrate this to use Kotlin coroutines instead.” This clarity dramatically improves the LLM’s output because it understands the direction of change and the trade-offs involved.


When you use this pattern, include:

  • The current code — paste the actual implementation you’re refactoring
  • The target approach — describe what you want to achieve (e.g., “coroutines,” “reactive streams,” “clean architecture”)
  • Constraints — mention any libraries you must keep, Android API levels you support, or existing patterns you follow

Here’s a real example. If you’re extracting business logic from a ViewModel into a UseCase, you might prompt:

“I have a ViewModel that handles user authentication. It makes API calls using Retrofit, manages loading state with LiveData, and performs input validation. I want to extract the authentication logic into a separate UseCase class following SOLID principles. Show me how to structure this so the ViewModel only handles UI state, and the UseCase handles all the business logic. I’m using Hilt for dependency injection.”

Providing Architectural Context

Large refactors fail when the LLM doesn’t understand your app’s architecture. Before asking for changes, give your AI assistant a mental model of how your modules fit together. Describe your layering — do you have separate data, domain, and presentation layers? Are you using multi-module architecture? Does your project follow a specific pattern like MVVM, MVI, or clean architecture?

You don’t need to paste your entire codebase. Instead, provide a high-level overview. For instance: “Our app uses a feature-based multi-module structure. Each feature has its own :feature-{name}:presentation and :feature-{name}:data modules. The domain layer has shared business logic. We use Hilt for DI and Kotlin coroutines for async work.” This context prevents the LLM from suggesting refactors that violate your module boundaries or introduce circular dependencies.

If you’re following established patterns, reference them: “We use the UseCase pattern as described in the Android architecture guides.” The LLM can then apply that established pattern to your specific code.

One approach that has worked well for me is to ask the model to create a .md plan file describing the current structure of a module, class, or data flow. Once it finishes, I review the plan, correct anything that’s wrong, and iterate as needed. When I’m happy with the plan, I know the LLM has an accurate picture of the current setup to work from.
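Such a plan file might look something like this (the module and class names here are purely illustrative):

```markdown
# Current Structure: :feature-login

## Modules
- :feature-login:presentation — LoginViewModel, LoginScreen
- :feature-login:data — LoginRepository, Retrofit LoginApi

## Data flow
1. LoginScreen observes UI state from LoginViewModel
2. LoginViewModel calls LoginRepository directly (no UseCase yet)
3. LoginRepository calls LoginApi and maps DTOs to domain models

## Observations
- Validation logic currently lives in the ViewModel
- No domain module; the repository interface is defined in :data
```

Reviewing a document like this is much faster than reviewing generated code, and catching a wrong assumption here prevents it from propagating into the refactor.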

Chunking Large Refactors Into Sequential Prompts

When you’re refactoring something big — like rewriting an entire feature’s data layer or restructuring how your app handles authentication — don’t ask the LLM to do it all at once. Break the refactor into logical steps, and handle them in sequence. This gives you checkpoints to review the work, catch issues early, and adjust direction if needed.

For a large migration, your sequence might look like:

  1. Refactor the domain layer (business logic and models)
  2. Update the data layer to use the new domain models
  3. Update the presentation layer to consume the new data layer
  4. Remove deprecated code and run tests

After each step, review the generated code carefully. Ask the LLM clarifying questions if something doesn’t match your codebase’s style or if you see potential issues. Then move to the next step armed with that feedback. This iterative approach also helps you catch architectural issues before they compound through the entire codebase.

Real Example: Callbacks to Coroutines Migration

Let’s walk through a concrete refactoring. Say you have a UserRepository that uses Retrofit callbacks:

import retrofit2.Call
import retrofit2.Callback
import retrofit2.HttpException
import retrofit2.Response

class UserRepository(private val api: UserApi) {
    fun fetchUser(id: String, callback: (Result<User>) -> Unit) {
        api.getUser(id).enqueue(object : Callback<User> {
            override fun onResponse(call: Call<User>, response: Response<User>) {
                if (response.isSuccessful) {
                    callback(Result.success(response.body()!!))
                } else {
                    callback(Result.failure(HttpException(response)))
                }
            }
            override fun onFailure(call: Call<User>, t: Throwable) {
                callback(Result.failure(t))
            }
        })
    }
}

Your prompt to the LLM:

“I’m migrating our UserRepository from Retrofit callbacks to Kotlin coroutines. Here’s the current implementation [paste code]. I want to: (1) use suspend functions instead of callbacks, (2) rely on Retrofit’s built-in coroutine support, (3) let exceptions propagate naturally so the caller can handle them with try-catch or runCatching. Show me the refactored version.”

The LLM will likely suggest a coroutine-based version that’s cleaner and more testable. You can then ask follow-up questions: “How should I update the ViewModel that calls this?” or “What changes do I need in my Hilt setup?”
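A typical result might look like the sketch below. Note that `UserApi` here is a plain interface standing in for your Retrofit service; with Retrofit 2.6+ the real interface would carry `@GET`/`@Path` annotations on a suspend function.

```kotlin
import kotlinx.coroutines.runBlocking

data class User(val id: String, val name: String)

// Stand-in for your Retrofit service interface. With Retrofit 2.6+ the
// real declaration would be:
//   @GET("users/{id}") suspend fun getUser(@Path("id") id: String): User
interface UserApi {
    suspend fun getUser(id: String): User
}

class UserRepository(private val api: UserApi) {
    // No callback parameter: HttpException or IOException propagate to the
    // caller, which handles them with try-catch or runCatching.
    suspend fun fetchUser(id: String): User = api.getUser(id)
}

fun main() = runBlocking {
    val repo = UserRepository(object : UserApi {
        override suspend fun getUser(id: String) = User(id, "Alice")
    })
    println(repo.fetchUser("42")) // User(id=42, name=Alice)
}
```

The nested `Callback` boilerplate disappears entirely, and error handling moves to the call site where the coroutine context is known.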

Extracting a UseCase: Another Real Example

If you have a ViewModel doing too much, you can use an LLM to help extract a UseCase. Suppose your LoginViewModel mixes validation, API calls, and UI state:

“I have a LoginViewModel that validates user input, calls an API to authenticate, and manages loading/error states. I want to extract the authentication logic into a separate UseCase. The UseCase should: (1) take email and password as inputs, (2) validate them before calling the API, (3) return a sealed class result (success or error). The ViewModel should call the UseCase and update UI state based on the result. Show me both the new UseCase and the refactored ViewModel.”
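A sketch of what the extracted UseCase might look like is below. `AuthApi`, `AuthResult`, and `LoginUseCase` are illustrative names, not library types, and the validation rules are placeholders for your own:

```kotlin
import kotlinx.coroutines.runBlocking

// Sealed result type the ViewModel can exhaustively branch on.
sealed class AuthResult {
    data class Success(val userId: String) : AuthResult()
    data class Error(val message: String) : AuthResult()
}

// Stand-in for your authentication service; returns the user's id
// on success, or throws on failure.
interface AuthApi {
    suspend fun authenticate(email: String, password: String): String
}

class LoginUseCase(private val api: AuthApi) {
    suspend operator fun invoke(email: String, password: String): AuthResult {
        // Validate inputs before touching the network.
        if ("@" !in email) return AuthResult.Error("Invalid email")
        if (password.length < 8) return AuthResult.Error("Password too short")
        return try {
            AuthResult.Success(api.authenticate(email, password))
        } catch (e: Exception) {
            AuthResult.Error(e.message ?: "Authentication failed")
        }
    }
}

fun main() = runBlocking {
    val login = LoginUseCase(object : AuthApi {
        override suspend fun authenticate(email: String, password: String) = "user-1"
    })
    println(login("dev@example.com", "secret123")) // Success(userId=user-1)
    println(login("not-an-email", "secret123"))    // Error(message=Invalid email)
}
```

The refactored ViewModel then collapses to calling the UseCase and mapping `AuthResult` into its UI state, which is exactly the thin UI layer the prompt asks for.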

By being specific about responsibility boundaries, you guide the LLM to create a clean separation of concerns. Your ViewModel becomes a thin UI layer, and the UseCase becomes testable business logic. Learn more about structuring your development workflow in our guide on Claude Code for Android Development Setup.

Best Practices When Refactoring with LLMs

Use plan mode whenever possible to set up a plan of action before asking the agent to execute the refactor. Always review the LLM’s output and plan carefully. Check for:

  • Consistency with your codebase — does the style match your project?
  • Correctness — does the logic preserve the original behavior?
  • Testing — are there unit tests you need to update?
  • Dependencies — did the refactor introduce any circular dependencies or unwanted coupling?

Don’t hesitate to iterate. If the first attempt doesn’t feel right, ask the LLM to revise it. For instance: “This looks good, but can you make the error handling use runCatching instead of try-catch?” or “The UseCase has too many parameters. Can you refactor it to use a sealed command class instead?”

To deepen your skills with AI-assisted development, check out our post on Claude Code Skills for Android: Automating Boilerplate, which covers more advanced prompting techniques.

Conclusion

Refactoring with LLMs is powerful when you structure your requests well. Use the before/after pattern, provide architectural context, and break large refactors into steps. Be specific about your constraints and goals, and always review the output. Over time, you’ll develop intuition for what prompts produce the best results for your codebase.


This post was written by a human with the help of Claude, an AI assistant by Anthropic.
