
How I Use AI Without Losing My Thinking

productivity, ai, engineering, workflow

AI is most useful to me when it accelerates structure, not when it replaces judgment.

That distinction matters more than people think.

When engineers fail with AI, it is rarely because the tool is bad. It is because they let it enter the loop too early, too broadly, or without enough evidence.

The result is familiar:

  - answers that sound confident but do not match the evidence
  - broad possibility trees that add cognitive load instead of removing it
  - output that substitutes for investigation rather than supporting it

The fix is not "never use AI." The fix is to give it a better role.

My default rule: hypothesis first, AI second

Before I use AI on a bug or confusing behavior, I try to capture three things:

  - the problem, stated in my own words
  - the evidence I actually have
  - a working hypothesis

Only then do I ask the model for help.

This changes the quality of the interaction immediately.

AI becomes less of an oracle and more of a collaborator responding to a real situation.

Where AI helps most

In day-to-day engineering, I get the highest ROI from AI in a few areas.

1. Test skeletons and checklists

AI is very good at turning a known situation into a structured starting point.

That includes:

  - a test skeleton for a bug I have already reproduced
  - an edge-case checklist around a change I understand
  - a step-by-step verification plan for a fix

The key phrase here is known situation. I do not want the model inventing the problem definition for me.
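
To make "structured starting point" concrete, here is a sketch of the kind of skeleton I mean, built around a made-up bug and a made-up `parse_price` helper (both hypothetical, not from any real project):

```python
from typing import Optional

def parse_price(raw: str) -> Optional[float]:
    """Stand-in implementation of a hypothetical helper under investigation."""
    raw = raw.strip()
    if not raw:
        return None  # the reported bug: this used to raise instead
    return float(raw.replace("$", ""))

def test_reproduces_reported_bug():
    # Anchor the skeleton to the exact case from the report.
    assert parse_price("") is None

def test_whitespace_only_input():
    # Edge case adjacent to the reported one.
    assert parse_price("   ") is None

def test_happy_path_still_works():
    # Regression guard: the fix must not break normal input.
    assert parse_price("$19.99") == 19.99
```

The value is not the code itself; it is that the model fills in the boring scaffolding around a problem I already understand.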

2. Refactor suggestions after the architecture is clear

Once I know the shape of the solution, AI is useful for:

  - suggesting cleaner names and smaller functions
  - spotting duplication and simplification opportunities
  - proposing alternatives within the boundaries I have already set

That is different from asking it to design the system from scratch while I remain mentally passive.

3. Summarization and compression

AI is genuinely useful when I have too much raw material and need a clean summary of:

  - long logs and stack traces
  - sprawling discussion threads
  - my own messy working notes

This is especially helpful for release notes, debugging notes, and post-block summaries.

Where I am more careful

There are areas where I want AI to stay firmly in an assisting role.

Examples include:

  - security-sensitive code and authentication flows
  - data migrations and other hard-to-reverse changes
  - architectural decisions with long-term consequences

The rule is simple:

if the cost of misunderstanding is high, AI can suggest but should not silently lead.

What creates noise

A few behaviors consistently reduce value.

Prompting before collecting anchors

If I ask too early, the answer often becomes a substitute for investigation.

That feels productive, but it weakens the actual debugging process.

Broad prompts with weak boundaries

When the scope is fuzzy, AI often responds with broad possibility trees that sound helpful but increase cognitive load.

Treating output as truth instead of draft

Even strong models are best treated as draft generators, thought partners, and structure helpers.

They are not a substitute for evidence.

The role I want AI to play

I want AI to help me do a few things faster:

  - turn a problem I have already defined into structure
  - widen the options I have already framed
  - compress raw material into clean summaries

I do not want it to become the place where my first real thought happens every time.

That is the line I try to protect.

A simple operating loop

When using AI on engineering work, this loop has worked well for me:

  1. Define the problem in my own words
  2. Capture evidence
  3. Write a hypothesis
  4. Ask AI for a bounded kind of help
  5. Verify against the real system
  6. Summarize the conclusion in my own words

If any of those steps disappear, the interaction usually becomes noisier.
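
The loop above can be sketched as a record you fill in order (the field names here are my own invention, for illustration only):

```python
from dataclasses import dataclass, field

@dataclass
class AiAssistedTask:
    # Steps 1-3 happen before any prompt is written.
    problem_in_my_words: str
    evidence: list = field(default_factory=list)
    hypothesis: str = ""
    # Steps 4-6 happen around the interaction itself.
    bounded_ask: str = ""
    verified_on_real_system: bool = False
    conclusion_in_my_words: str = ""

    def complete(self) -> bool:
        # "If any of those steps disappear, the interaction gets noisier."
        return all([
            self.problem_in_my_words,
            self.evidence,
            self.hypothesis,
            self.bounded_ask,
            self.verified_on_real_system,
            self.conclusion_in_my_words,
        ])

# Hypothetical task, part-way through the loop:
task = AiAssistedTask(
    problem_in_my_words="retry storm after the afternoon deploy",
    evidence=["latency graph", "duplicate request IDs in logs"],
    hypothesis="clients retry on 500 without backoff",
)
task.bounded_ask = "list edge cases for adding backoff to this retry path"
assert not task.complete()  # steps 5 and 6 are still missing
```

Nothing here is clever; the point is that the prompt (`bounded_ask`) is structurally the fourth field, not the first.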

How to know if AI is helping or hurting

A useful self-check is to ask after a work block:

  - Did AI shorten the path to a verified answer?
  - Did it replace investigation I should have done myself?
  - Can I explain the conclusion in my own words, without the transcript?

If the answer to the last question is no, the work is not finished.

A minimal logging pattern

If you want to be deliberate without adding much overhead, log each meaningful AI use in one short line:

  date | category | what I asked for | outcome (kept / edited / discarded)

That is enough to reveal patterns over time.
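
A minimal sketch of that log, assuming a plain-text line format and a small made-up category vocabulary:

```python
from collections import Counter
from datetime import date

log: list = []  # stands in for a plain text file, one line per AI use

def log_ai_use(category: str, ask: str, outcome: str) -> None:
    """outcome is 'kept', 'edited', or 'discarded' after verification."""
    log.append(f"{date.today()} | {category} | {ask} | {outcome}")

# Hypothetical entries:
log_ai_use("tests", "edge-case checklist for date parsing", "kept")
log_ai_use("design", "propose service boundaries", "discarded")
log_ai_use("summary", "compress debugging notes into release notes", "edited")

# The payoff over time: which categories actually survive verification.
kept_by_category = Counter(
    line.split(" | ")[1] for line in log if line.endswith("kept")
)
```

Grep or a ten-line script over a few weeks of these lines answers the "helping or hurting" question with data instead of vibes.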

You may notice that AI is very strong in some categories and consistently disappointing in others. That is useful operational knowledge.

Where I land on this

The failure mode I am most careful about is not bad AI output. It is reaching for AI before I have thought clearly.

Keeping my own reasoning first is less about distrust and more about staying the person who actually understands the work. AI can accelerate structure and widen options — but judgment, framing, and final understanding should stay mine.