Step 03
Primary constraint
This section covers the clearest limit in the current build, how we know it, and why the recommendation points there.
This run measured 6.3 GB of effective model memory. That is enough for smaller local AI workloads, which already run comfortably on this build, but larger coding and reasoning models run out of room before raw speed becomes the limiting factor.
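A rough back-of-the-envelope check illustrates why a memory budget, not speed, is the first wall. The sketch below uses illustrative assumptions (4-bit quantization at roughly 0.5 bytes per parameter, a hypothetical 1 GB overhead allowance for KV cache and runtime buffers); none of these figures come from this run's measurements.

```python
def fits(params_billion: float, bytes_per_param: float,
         budget_gb: float = 6.3, overhead_gb: float = 1.0) -> bool:
    """Estimate whether a model's weights fit in the memory budget.

    overhead_gb is a hypothetical allowance for KV cache and runtime
    buffers; real overhead varies with context length and backend.
    """
    weights_gb = params_billion * bytes_per_param
    return weights_gb + overhead_gb <= budget_gb

# Under these assumptions, a 7B model at ~0.5 bytes/param (4-bit)
# fits in 6.3 GB, while a 13B model at the same quantization does not.
print(fits(7, 0.5))    # True
print(fits(13, 0.5))   # False
```

The point is not the exact numbers but the shape of the constraint: weight size scales linearly with parameter count, so the budget sets a hard ceiling on which model families are reachable at all.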
That is why the recommendation points to more memory headroom first: it unlocks larger model families sooner than chasing speed alone, and it is the fastest way to move from smaller models to larger ones without changing the whole character of the build.
Compact evidence for why this run should be trusted and how the machine was identified.