Dylan Andersen's Docs

Advanced Techniques

Multi-agent orchestration, hooks, subagents, and the patterns worth graduating to after the basics land

Once you're comfortable with Chat, Inline Edit, Tab, and the @ context primitives (all of which live in Cursor Fundamentals), there's a second layer of Cursor you can grow into. This page is that layer.


Who this page is for

SEs who've already shipped a couple of Agentforce or LWC builds with Cursor and want to go faster. If you're still learning the keyboard shortcuts, start with Cursor Fundamentals.

Multi-agent orchestration

The useful version of "let the AI do it" is rarely one giant prompt. It's a small team of focused agents, each with a narrow scope and a clear handoff.

The pattern

  1. Open Agent mode. Give the lead agent a broad task: "build a customer service triage agent, test it, and document it."
  2. The lead agent spawns subagents (via the Task tool) that run in parallel. One might be scaffolding the agent metadata, another writing tests, another drafting the handoff doc.
  3. Each subagent has its own context window and returns a summary when done.
  4. The lead agent stitches the summaries together and presents the result.

This works because each subagent isn't fighting for context with the others. The scaffolding subagent doesn't see your test harness; the test subagent doesn't see your documentation drafts.

When to use it

  • The task has three or more genuinely independent pieces.
  • You can write a short, clear brief for each piece.
  • You'd rather get three rough drafts in parallel than one polished sequential output.

When to skip it

  • The pieces depend on each other's decisions. Subagents work best on independent work; sequential dependencies lose the parallel benefit.
  • The task is small enough for a single agent to just do.

Example brief

Act as a lead agent. Spawn three subagents in parallel:

1. Subagent A: generate the force-app metadata for a `Customer Service
   Triage` Agentforce Builder agent with topics for warranty, billing,
   and shipping. Use the sf-ai-agentforce skill.

2. Subagent B: generate 12 aiEvaluationDefinition tests covering the
   happy paths for each topic plus ambiguous cases. Use the
   sf-ai-agentforce-testing skill.

3. Subagent C: draft docs/handoff.md describing the agent, its topics,
   and how to extend it. Use the afd360-poc-docs-skill.

When all three return, consolidate their outputs and propose any edits
needed for the three pieces to line up.

Background agents

Background agents run in a sandboxed environment while you keep typing. Good use cases for SE work:

  • Running a long sf project deploy start --test-level RunLocalTests against a customer sandbox while you prep the next demo scene.
  • Pulling and analyzing a week's worth of session traces from Data 360.
  • Refactoring every LWC in the project to SLDS 2 while you write the handoff doc.

Start one from the chat panel's background agent button. Give it a self-contained brief. When it's done, it hands you a summary and a set of changes to review.

Not a fire-and-forget

Background agents are powerful, and that means they can do real damage. Before kicking one off against a customer org, confirm auto-run is scoped and the target alias is correct. See Security & Data Handling.

Hooks

Hooks run scripts at specific moments in the agent's lifecycle. They live in .cursor/hooks.json. Two shapes worth knowing:

  • Pre-edit hooks run before the agent writes a file. Useful for format-on-edit, linting, or guardrails that reject unsafe changes.
  • Post-response hooks run after the agent finishes a turn. Useful for logging, running a lightweight test, or triggering a deploy.

A tiny example that runs prettier on any file the agent edits:

{
  "hooks": [
    {
      "event": "afterFileEdit",
      "match": ["**/*.cls", "**/*.js", "**/*.html"],
      "command": "npx prettier --write \"${file}\""
    }
  ]
}

Hooks run invisibly. You write them once, commit them to the repo, and every agent that opens the project picks them up.
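A post-response hook can follow the same shape. The example below is illustrative: the event name `afterAgentResponse` is an assumption here, so check Cursor's hooks reference for the exact identifier before committing it. The idea is a lightweight audit log of agent turns:

```json
{
  "hooks": [
    {
      "event": "afterAgentResponse",
      "command": "echo \"$(date -u +%FT%TZ) agent turn finished\" >> .cursor/agent-activity.log"
    }
  ]
}
```

Keep post-response commands fast; a slow test suite here will drag on every single turn.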

Reusing prompts as rules, rules as skills

A pattern from the Rules & AGENTS.md page that's worth internalizing for advanced work: prompts evolve into rules, and rules evolve into skills.

  • A sentence you keep typing becomes a rule.
  • A rule that sprouts examples and a rubric becomes a skill.
  • A skill that needs to call the real world gets wired to an MCP.

The advanced move is to delete rules when they stop earning their keep. Rule creep is real. Every six weeks, open .cursor/rules/ and ask which rules you've actually relied on. Archive the rest.
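As a concrete sketch of the first step, here is roughly what a sentence-turned-rule might look like as a file in .cursor/rules/. The frontmatter fields follow Cursor's .mdc rule format, and the Apex conventions themselves are hypothetical examples, not recommendations from this page:

```
---
description: Apex test conventions for this project
globs: ["force-app/**/*Test.cls"]
---

- Use @TestSetup for shared test data; avoid DML in individual test methods.
- Assert with Assert.areEqual and a failure message, never bare asserts.
```

Because the rule is scoped by glob, it only enters context when the agent touches a matching file, which is exactly the property that makes rules cheaper than retyping the prompt.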

Custom MCP authoring

When an existing MCP doesn't do what you need, write one. An MCP server is, in the common local case, just a stdio process that speaks the Model Context Protocol over JSON-RPC. A minimal one in Node is under a hundred lines.

When to build your own:

  • You have an internal tool with a stable CLI that the agent would benefit from.
  • You want to expose a narrow, safe slice of a system (for example, "query this read-only reporting database") without giving the agent broad access.
  • You're tired of asking the agent to shell out to a command when a proper tool description would do.

When not to:

  • An official or community MCP already does it. Check modelcontextprotocol.io first.
  • The tool is a one-off. Just let the agent use the CLI.

Start from the MCP TypeScript SDK template and expose two or three tools before expanding.

Multi-org MCP setups

For customer work, the standard advanced pattern is one Salesforce MCP per target org, each with a different name and a scoped toolset. The full write-up lives in Multi-Org Workflows. The short version:

{
  "mcpServers": {
    "Salesforce ACME UAT": {
      "command": "npx",
      "args": [
        "-y", "@salesforce/mcp@latest",
        "--orgs", "acme-uat",
        "--toolsets", "orgs,metadata,data"
      ]
    },
    "Salesforce ACME Prod (read-only)": {
      "command": "npx",
      "args": [
        "-y", "@salesforce/mcp@latest",
        "--orgs", "acme-prod",
        "--toolsets", "orgs,data"
      ]
    }
  }
}

The production server has no metadata and no users toolset, so the agent cannot accidentally deploy or provision against it.

Chaining operations intentionally

Cursor's agent handles multi-step work well, but the quality of the output depends on the quality of the brief. A useful shape for chained Salesforce work:

Plan first, then execute. For each step, state the command or edit, the
target org alias, and the expected verification. Stop and ask before
any destructive action.

Steps:

1. Retrieve the latest `AccountService.cls` from acme-uat.
2. Add a method `closeDormantAccounts()` with a matching test.
3. Run the test suite locally (scratch org) and report coverage.
4. If passing, deploy to acme-uat with `--dry-run`.
5. If the dry-run succeeds, wait for my confirmation, then deploy for real.
6. After deploy, retrieve the class back and diff to confirm.

That brief is long, but you only write it once. The agent turns it into a sequence of tool calls, reports back at each step, and waits for you at the one place it should.
