
Code migration is one of the most tedious tasks in software engineering. Upgrading from React class components to hooks. Moving a codebase from JavaScript to TypeScript. Migrating an API from REST to gRPC. These projects are well-defined, repetitive, and time-consuming — which makes them ideal candidates for AI assistance.

Over the past year, I have used AI tools to assist with four major migration projects. Two went well. One was a partial success. One was a disaster that cost more time than a manual migration would have. This article breaks down what worked, what did not, and how to evaluate whether AI-assisted migration makes sense for your project.

The Migration Landscape in 2026

AI-powered code migration falls into three categories:

  1. LLM-assisted migration: Using Claude, GPT-4, or similar models to transform code file by file, with human review. This is the most common approach.
  2. Specialized migration tools: Purpose-built tools like OpenRewrite (for Java), ts-morph (for TypeScript), and jscodeshift (for JavaScript) that use AST transformations.
  3. Hybrid approaches: Using AI to generate AST transformation rules, then applying those rules deterministically across the codebase.

The key insight I have learned: AI is excellent at understanding intent and generating initial transformations, but it struggles with consistency across a large codebase. The hybrid approach — using AI to write the transformation rules, then applying them mechanically — consistently produces the best results.

Case Study 1: JavaScript to TypeScript (Success)

The project: a 45,000-line Express.js API with 180 files, zero TypeScript. The goal was full TypeScript adoption with strict mode.

What Worked

I broke the migration into phases, using Claude to assist each one:

Phase 1: Infrastructure (manual, 2 hours)

// tsconfig.json - Start with loose settings, tighten later
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": false,           // Start permissive
    "allowJs": true,           // Coexist with JS files
    "outDir": "./dist",
    "rootDir": "./src",
    "esModuleInterop": true,
    "resolveJsonModule": true,
    "declaration": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "**/*.test.ts"]
}

Phase 2: Type extraction (AI-assisted, 4 hours)

I fed Claude the database schema, API routes, and example request/response payloads. It generated comprehensive type definitions:

// types/api.ts - Generated by AI, then reviewed and refined
export interface User {
  id: string;
  email: string;
  displayName: string;
  role: "admin" | "editor" | "viewer";
  createdAt: Date;
  lastLoginAt: Date | null;
  preferences: UserPreferences;
}

export interface UserPreferences {
  theme: "light" | "dark" | "system";
  emailNotifications: boolean;
  timezone: string;
}

export interface CreateUserRequest {
  email: string;
  displayName: string;
  role?: User["role"];  // Defaults to "viewer"
}

export interface PaginatedResponse<T> {
  data: T[];
  pagination: {
    page: number;
    pageSize: number;
    total: number;
    totalPages: number;
  };
}

The AI generated about 85% of the types correctly from the schema alone. The remaining 15% required manual refinement, mostly around nullable fields and union types that were implicit in the JavaScript code.
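Using the lastLoginAt field from the User type above as the example (the InferredUser/RefinedUser naming is mine, purely for illustration): the schema column type alone suggested a non-null Date, but the JavaScript code checked for null, so the union had to be added by hand during review.

```typescript
// Hypothetical refinement: the AI inferred a plain `Date` from the schema,
// but the JS code handled users who had never logged in, so the type was
// widened to a nullable union during manual review.
interface InferredUser {
  lastLoginAt: Date;        // AI's guess from the schema column type
}

interface RefinedUser {
  lastLoginAt: Date | null; // after reviewing the runtime behavior
}

// A check the JavaScript code performed but never wrote down as a type:
function hasLoggedIn(user: RefinedUser): boolean {
  return user.lastLoginAt !== null;
}
```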

Phase 3: File-by-file conversion (AI-assisted, 12 hours)

This is where the approach mattered. Instead of asking the AI to convert each file independently, I provided it with a conversion prompt that included the type definitions and a style guide:

Convert this JavaScript file to TypeScript. Rules:
1. Use the types from types/api.ts (already provided)
2. Prefer explicit return types on all exported functions
3. Use unknown instead of any where the type is genuinely unknown
4. Preserve all existing comments
5. Do not change any business logic
6. Add TODO comments where the type cannot be determined from context

File to convert:
[paste file content]

The consistency of the prompt mattered enormously. Without rule #5, the AI would occasionally “improve” business logic, introducing subtle bugs. Without rule #6, it would use any to paper over type ambiguities instead of flagging them for human review.
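Output under these rules looks something like the following (a hypothetical utility, not a file from the project), with rules 2, 3, and 6 visible:

```typescript
// Hypothetical conversion output illustrating the prompt rules:
// rule 3 — the request body is genuinely unknown at the boundary, so it
// takes `unknown` rather than `any` and narrows before use.
interface ParsedUser {
  email: string;
  displayName: string;
}

// Rule 2: explicit return type on the exported function.
export function parseUserPayload(body: unknown): ParsedUser | null {
  if (typeof body !== "object" || body === null) return null;
  const record = body as Record<string, unknown>;
  if (typeof record.email !== "string" || typeof record.displayName !== "string") {
    return null;
  }
  // TODO: some payloads also carry a `role` field whose type could not be
  // determined from context (rule 6) — flagged for human review.
  return { email: record.email, displayName: record.displayName };
}
```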

Results

Metric | Value
Total time | 18 hours (estimated 60 hours manual)
Files converted | 180
AI accuracy (no manual edits needed) | 72%
Bugs introduced by AI | 3 (caught in review)
TypeScript strict mode errors remaining | 0

Case Study 2: React Class Components to Hooks (Partial Success)

The project: a 120-component React application. About 60 components used class syntax with lifecycle methods, refs, and complex state management.

What Worked

Simple components converted perfectly. An AI could turn this:

class UserCard extends React.Component {
  constructor(props) {
    super(props);
    this.state = { expanded: false };
  }

  toggleExpand = () => {
    this.setState(prev => ({ expanded: !prev.expanded }));
  }

  render() {
    return (
      <div className="user-card">
        <h3>{this.props.user.name}</h3>
        {this.state.expanded && <UserDetails user={this.props.user} />}
        <button onClick={this.toggleExpand}>
          {this.state.expanded ? "Collapse" : "Expand"}
        </button>
      </div>
    );
  }
}

Into this, correctly, every time:

function UserCard({ user }: { user: User }) {
  const [expanded, setExpanded] = useState(false);

  const toggleExpand = useCallback(() => {
    setExpanded(prev => !prev);
  }, []);

  return (
    <div className="user-card">
      <h3>{user.name}</h3>
      {expanded && <UserDetails user={user} />}
      <button onClick={toggleExpand}>
        {expanded ? "Collapse" : "Expand"}
      </button>
    </div>
  );
}

What Failed

Complex components with componentDidMount, componentDidUpdate, and componentWillUnmount interacting with each other were problematic. The AI would generate useEffect hooks that looked correct but had subtle dependency issues:

// AI-generated — looks right, but has a stale closure bug
useEffect(() => {
  const interval = setInterval(() => {
    if (isActive) {  // This captures the initial value of isActive
      fetchNewData();
    }
  }, 5000);
  return () => clearInterval(interval);
}, []);  // Missing isActive in dependency array

These bugs are insidious because they pass superficial review. The component renders correctly in most test scenarios. The stale closure only manifests when the user toggles isActive after the initial render, which might not be covered by existing tests.
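For contrast, one correct version adds isActive to the dependency array so the interval is torn down and recreated when it toggles (keeping the interval stable and reading isActive from a ref is another valid fix):

```typescript
// Corrected sketch: with isActive in the dependency array, the effect
// re-runs on every toggle, so the closure always sees the current value.
// (Alternative: keep one interval and read isActive from a useRef.)
useEffect(() => {
  const interval = setInterval(() => {
    if (isActive) {
      fetchNewData();
    }
  }, 5000);
  return () => clearInterval(interval);
}, [isActive]);  // Re-created whenever isActive changes
```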

The Fix: AST-Based Rules

For the complex components, I switched to writing jscodeshift transforms — and used AI to help write the transforms themselves:

// jscodeshift transform: convert componentDidMount to useEffect
export default function transformer(file, api) {
  const j = api.jscodeshift;
  const root = j(file.source);

  root.find(j.MethodDefinition, {
    key: { name: "componentDidMount" }
  }).forEach(path => {
    const body = path.value.value.body;
    // Generate useEffect with empty dependency array
    const useEffect = j.expressionStatement(
      j.callExpression(j.identifier("useEffect"), [
        j.arrowFunctionExpression([], body),
        j.arrayExpression([])  // Empty deps = mount only
      ])
    );
    // Replace the method with the hook call (simplified: a full transform
    // would also rewrite the surrounding class into a function component)
    j(path).replaceWith(useEffect);
  });

  return root.toSource();
}

The transform is deterministic — it applies the same transformation to every matching component. AI helped me write the transform, but the execution was mechanical. This eliminated the consistency problem.

Case Study 3: REST to gRPC (Disaster)

The project: migrating a 40-endpoint REST API to gRPC for internal service communication while maintaining a REST gateway for external clients.

Why It Failed

The migration required changes at multiple layers simultaneously: Protocol Buffer definitions, server implementation, client code, and the REST gateway. The AI could handle each layer in isolation but could not maintain consistency across layers.

Specific problems:

  • The AI generated .proto files that looked correct but used inconsistent naming conventions between services
  • Generated server implementations did not match the .proto definitions due to context window limitations
  • Error mapping between gRPC status codes and HTTP status codes was inconsistent
  • Streaming endpoints were generated with incorrect flow control

After three days of AI-assisted migration and debugging, I scrapped the AI-generated code and did a manual migration in five days. The manual approach was slower but produced correct, consistent code.

Lesson Learned

AI-assisted migration works when the transformation is local — converting one file, one component, or one function at a time with clear input and output types. It fails when the transformation is systemic — requiring coordinated changes across multiple files that must maintain mutual consistency.

A Framework for Evaluating AI Migration

Based on these experiences, here is a decision framework:

Factor | AI Works Well | AI Struggles
Scope | File-by-file transforms | Cross-file coordination
Validation | Compiler catches errors | Runtime-only validation
Pattern | Mechanical, repetitive | Requires domain knowledge
Types | Well-defined input/output | Implicit contracts
Testing | Existing tests validate | No existing test coverage

The ideal AI migration candidate has three properties:

  1. Each file can be converted independently
  2. A type checker or compiler validates the output
  3. Existing tests confirm behavioral correctness

Tools Worth Using in 2026

For JavaScript/TypeScript migrations:

  • ts-morph: Programmatic TypeScript AST manipulation. Excellent for adding types, renaming, restructuring.
  • jscodeshift: Facebook's codemod toolkit. Battle-tested on millions of lines of code.
  • Claude/GPT-4 with structured output: For generating the transformation rules themselves.

For Java migrations:

  • OpenRewrite: The gold standard. Handles Spring Boot upgrades, Java version migrations, and dependency updates. Its recipe system is deterministic and testable.
  • Error Prone: Google's static analysis tool includes auto-fix suggestions that serve as micro-migrations.

For multi-language projects:

  • Semgrep: Pattern-based code transformation that works across languages. Use it for security-related migrations (fixing vulnerable patterns) and API changes.
  • ast-grep: A newer tool that combines AST matching with a simple pattern syntax. Faster than Semgrep for large codebases.

The Hybrid Workflow

The approach that consistently works best:

  1. Use AI to analyze the codebase and identify all instances that need migration. AI is excellent at categorization.
  2. Use AI to generate transformation rules (jscodeshift transforms, OpenRewrite recipes, Semgrep rules). Have it write the rule, not apply the rule.
  3. Apply the rules mechanically across the entire codebase. This ensures consistency.
  4. Use AI for the long tail — the 10-15% of cases that do not fit the mechanical rules. These need individual attention, and AI can draft the initial conversion for human review.
  5. Run existing tests. If tests fail, debug manually. AI-generated fixes for test failures tend to mask bugs rather than fix them.
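Steps 1 and 4 imply a triage pass over the codebase. A minimal sketch, using the React case study's split as the heuristic (the function name and the lifecycle-method rule are my own, not a real tool):

```typescript
// Triage sketch: components that touch lifecycle methods go to the long
// tail for AI-drafted, human-reviewed conversion; everything else gets
// the mechanical transformation rules.
const LIFECYCLE_METHODS = [
  "componentDidMount",
  "componentDidUpdate",
  "componentWillUnmount",
];

function triage(source: string): "mechanical" | "long-tail" {
  return LIFECYCLE_METHODS.some((method) => source.includes(method))
    ? "long-tail"
    : "mechanical";
}
```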

This workflow leverages AI's strengths (understanding patterns, generating code) while avoiding its weaknesses (maintaining consistency across files, handling complex interdependencies).

What Is Coming Next

The gap between “AI can convert this file” and “AI can migrate this system” is closing. Longer context windows help. Tool-use capabilities (where the AI can run the compiler and iterate on errors) help more. But for now, the hybrid approach — AI-generated rules, mechanical application — remains the most reliable path for production migrations.

If you are planning a migration, start with a 10-file pilot. Measure the AI accuracy rate. If it is above 80% with minimal review, scale up. If it is below 60%, invest in writing deterministic transformation rules instead. The worst outcome is an inconsistent codebase where half the files were migrated by AI with slightly different patterns.
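That decision rule is simple enough to write down. A sketch (the function name and the "extend pilot" middle branch are my additions — the article leaves the 60-80% band open):

```typescript
// Pilot evaluation sketch encoding the thresholds above: >80% accuracy
// means scale up the AI-assisted approach; <60% means invest in
// deterministic transformation rules instead.
type MigrationDecision = "scale-up-ai" | "write-deterministic-rules" | "extend-pilot";

function evaluatePilot(filesConverted: number, filesNeedingNoEdits: number): MigrationDecision {
  if (filesConverted === 0) throw new Error("run the pilot first");
  const accuracy = filesNeedingNoEdits / filesConverted;
  if (accuracy > 0.8) return "scale-up-ai";
  if (accuracy < 0.6) return "write-deterministic-rules";
  // The in-between band is a judgment call; extending the pilot is one option.
  return "extend-pilot";
}
```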
