problem-origin builders
arguing with a bot
I applied for a Forward Deployed Engineer role at Lovable and ran my answers through GPT afterward to see what it thought. I often do this as a form of validation; why exactly probably has something to do with insecurity, or with propping myself up. GPT’s feedback was immediate and predictable: compress your responses, plus an annoying reminder that recruiters skim hundreds of applications and rarely care about specificity, preferring high-level, punchy metrics. All of it left me feeling like I should remove my personality from my responses. GPT specifically suggested tightening my narrative, dropping any external links that point to my work (because apparently they look scammy), and losing the emojis (because they seem unprofessional).
I told GPT I used to be a recruiter and a hiring manager, and that its suggestions would result in a profile I would have passed on when I was reviewing candidates, effectively pointing out that it probably doesn’t know what it is talking about. I explained that when I was evaluating applications, the thing I hated most was polished emptiness: metrics I couldn’t substantiate from the included materials. When that happened, I did my own research on the candidate if they seemed good enough, but I would have preferred more detail in the application itself, and its absence left me feeling that working with them would be like pulling teeth. Put another way, candidates who said all the right things and revealed nothing about how they actually think often gave me the “ick” and left me highly skeptical.
GPT pushed back. It said I was probably right about substance mattering more in certain environments, and that my answers might land well with founders or technical leaders who read past the first paragraph. Then it told me the problem is that a recruiter might see my resume titles and think “program manager” instead of “engineer,” and it questioned my ability to deliver technical products because the “depth” of what I can build was unclear from the documents and overshadowed by my previous, more operational roles. It scored my likelihood of getting an offer at 15 to 25 percent and told me I could bump that to 30 or 40 if I “optimized properly.”
I said something like, “well I guess that is why I am destined to build on my own and work for myself, because that is me without any filter or other bullshit, and if they don’t like it, fuck ’em.” It’s funny reflecting on this, because I know how technologies like GPT work; there’s nothing there other than pattern matching, pattern completion, and vector matrices. All the same, I was having a full-blown argument about hiring and recruiting processes with it, which I guess is useful, though probably not what OpenAI or Anthropic intended.
apparently there’s a useful pattern to contemplate
Interestingly, GPT asked me a few follow-up questions, specifically:
“When you applied to Lovable, were you primarily hoping for: A, a job you’d genuinely want, or B, a conversation with interesting people, even if it didn’t turn into a job?”

“When you start projects like Fetch Quests or your protocol work, what usually comes first for you? A, a technical primitive (something new becomes possible); B, a social/economic mechanism you want to test; C, a user problem; or D, pure curiosity/exploration?”
When I answered the second question, GPT identified a pattern across my projects that I had not articulated clearly before. It coined a builder archetype, “problem-origin builder,” and said that is what I am. Most hiring pipelines are designed to evaluate people who take defined problems and implement solutions inside existing structures. Those pipelines test for implementation skill, stack familiarity, and the ability to collaborate within a hierarchy that already knows what it wants built. As a result, interviews, system design rounds, behavioral questions, etc., all assume the organization has already decided which problems matter, i.e., a candidate’s job is to execute. I agree, but a candidate’s job is ALSO to evaluate, understand, stay curious, and solve systems-level problems.
I clarified that I do not work like a cog in a wheel, and never will, at which point GPT asked me to explain why I build each project, so I did. Here’s what I said:
“I built status.health because I was afraid of, and tired of, getting STDs from hookups who were not responsible enough to get tested regularly. I built Autophage because I dislike how traditional monetary systems trap people into building for a select few who got lucky, either from generational wealth or from being in the right place at the right time. I built this blog and continue to post because I want to share my thoughts and because I want a place to document my work. And finally, I am currently working on Fetch Quests because there needs to be a way to make money without all the hoopla of the hiring process, as evidenced by this back and forth about a job application I had already submitted.”
The act of explaining each project helped clarify my narrative as a builder and strengthened the case that I am a problem-origin builder, and that I should probably just keep building instead of wasting time on applications like the one for Lovable, or on the numerous LinkedIn DMs I spend time crafting that mostly go into the void. It’s sad, because I do like working with other people, and I would have liked working for an organization like GitHub before the Microsoft acquisition, or enjoying the perks of Apple. But as hiring tools integrate with AI frameworks and become further automated, I can see the divide between hiring an assembly worker, a cog in the wheel, and hiring a person who can work autonomously getting wider and wider, just like the wealth gap.
why being a problem-origin builder makes you invisible
GPT gave the “problem-origin builder” archetype a second label, “technical ecosystem builder,” and explained that it means someone who shows up when a technology is new or complex, and whose job becomes helping other people figure out how to use it: building tools around it, translating it into something operational. That is literally what I did at GitHub when I helped organizations migrate and built integration tools for them; the same at Coinbase, where I built endpoint management infrastructure that ran across thousands of machines, and at Parity through the Pioneers Prize. That is all forward deployed engineering by a different name, which is exactly what the Lovable role is, and GPT saw it immediately. But it was also clear from the chat that a recruiter or someone reading my application is unlikely to spend enough time to “read between the lines.” Specifically, GPT said people would probably not read my resume as an engineering resume, because my titles include roles like Technical Recruiter, Technical Program Manager, Engineering Manager, and Founder; a recruiter scanning is going to think “operations person” before they think “builder,” which I think is their problem, not mine.
I thought this was incredibly ironic and called GPT out on it. “You literally just told me I am best suited for technical environments and in the same breath questioned whether I can actually build things, what the fuck?” GPT acknowledged, of course, and tried to explain, in its typical antithetical way, that it was describing “perception, not reality,” and that my career clearly shows I build but the artifact of my resume “does not scream it.” Fair enough, I guess. But that is kind of the whole point, isn’t it? If the thing designed to represent me cannot represent what I actually do and who I am, then the system reading that thing is broken, at least in my opinion.
GPT also brought up something I found useful as a former recruiter: the idea that there are two very different types of people evaluating your application. The first type, and the most common, is what it called signal-compression evaluators, people who want concise answers, quick signal, and easy comparison across a pile of 300 applications. Their whole vibe is “give me the signal fast so I can move on.” The second type is evidently rarer (no surprise): “substance evaluators,” people who prefer to see how a person thinks, and so look for depth, curiosity, and original reasoning. For this type of reviewer, over-compressed answers are a negative signal because they lack personality and substance, and feel generic, manufactured, like the person ran their responses through a prompt, which nowadays is probably true. It makes sense, then, that my application answers and materials (resume, blog, etc.) are clearly written for the second type of reviewer, which is basically old me. The problem is that the first type is almost always the one who reads and filters your application, because companies hire recruiters like that, and recruiters are like that because they are simply lazy. Recruiters like this gate applications from ever reaching the second type of reviewer (more often the decision makers, like hiring managers), so those reviewers never get a chance to see the candidates they actually need and probably want. That creates a negative feedback loop for literally everyone involved in the hiring process, and results in the high turnover and low job satisfaction that is so widely reported. Or that’s at least my take as someone who has done all the things.
I already knew most of this intuitively from recruiting and being a hiring manager, and had intentionally forgotten it. Reading it laid out by GPT made me think about it more deeply again and reminded me of a study I have seen referenced a lot: Bertrand and Mullainathan sent out identical resumes with different names and found huge callback gaps based on perceived race alone, credentials held constant (NBER Working Paper 9873). If the filter cannot even process names without bias, it is definitely not equipped to evaluate someone whose career does not follow the expected ladder. Companies optimize for speed and pattern matching and then wonder why all their teams look and think the same. “Culture fit” is often touted as an important factor, but as applied it’s like saying “we hired people who remind us of ourselves.” Companies spend millions on diversity and inclusion programs that later get deprecated, so clearly they see the problem; they just have blinders on about what the actual problem is, or the right people don’t care 🤷
performing for a filter that can’t see you has a cost
GPT actually scored my three application responses, which was both helpful and annoying. Its response immediately made me skeptical, since it produced a table in less than 5 seconds. Even if those tokens had zero latency, it certainly didn’t think “critically,” I thought; then I realized “Thinking Mode” wasn’t on… so I re-enabled it and got a better result. “Why I want to join Lovable” got a 6.5 out of 10; it said I praised the company but did not show deep enough thinking about their core themes: AI-native software, intelligence-first development, whatever. My answer about the most impressive thing I have done got an 8, which felt fair because that one was genuinely strong: I talked about building a startup solo, designing an AI pipeline, obtaining a patent without a lawyer, and several of the learnings I took away from the process. Then the “anything else you’d like us to know” question got a 5 out of 10 because I mentioned something I am not the best at. GPT told me to never admit weakness in an application, called it a “huge mistake,” and I remember thinking, wow, so I should just lie then? It also flagged my emojis 😊 ✨ and said engineers sometimes cringe at them, and that my tone was too conversational and blog-like. Applications should be tight and confident, it said. Right, because nothing says “I am confident” like sanding down your personality into something that reads like everyone else’s LinkedIn About section and is probably bullshit.
I told GPT that maybe I do not need its bandwidth optimization, because I have more bandwidth than most people. I felt cocky doing this, but if it felt the need to point out bandwidth optimization, then I felt the need to point out that I clearly don’t need it if that is my style. Its whole analysis was also ironic, and I said so again: you told me I thrive in technical environments and then questioned whether I can actually build stuff? If I were not open or willing to learn how systems work, I would be selling burgers at McDonald’s, as my high school Algebra teacher used to threaten, not sitting here with a patent, four shipped products, and a blog full of spec work.
GPT took that well, which I appreciated, and came back with something I did not expect. It said “the real question is not corporate versus independent, it is what environment best amplifies how you naturally work.” It then started to puff me up, suggesting I am more founder-type than employee-type, and not in the solo indie hacker sense, more like “the kind of person who builds platforms and protocols and economic systems,” which just made me roll my eyes. It said people with my pattern(s) tend to end up as technical founders creating new primitives, in platform ecosystem roles like developer relations and applied research, or in frontier R&D environments where the job is basically “figure out what this technology can actually become.” I do not know about all that, but I suppose the pattern tracks when I look at my own history; it’s probably just regurgitating to keep me happy. C’est la vie.
Society uses employment as a legitimacy proxy. People ask what you do at parties, and if the answer is not a company name they kind of glaze over, and when you are building something that does not fit neatly into an org chart, the social pressure is constant and the economic risk is extremely real. Exploration takes time. Rebuilding broken systems has a longer loop before it looks like anything to people on the outside, and it usually costs a lot of money. You sacrifice somewhere, with your time or with your money. If payment and legitimacy are locked behind interviews, then the people doing the most interesting frontier work are pushed toward either conforming to get a paycheck or running out of money, and I have been way too close to the second option more times than I would like to admit. There needs to be a better solution, and I do think there could be. Working with others is more fun, and companies would benefit, too.
flipping hiring on its head
So this is where Fetch Quests comes in. I know I have written about it before, but the GPT conversation clarified something for me about why I am actually building it, not just what it does. The problem-origin builder archetype is exactly who I want Fetch Quests to serve, because those are the people most consistently overlooked by the current system, and because it solves the problem for me, personally. Fetch Quests, much like the other big projects I have embarked on, is my attempt to sidestep a problem entirely by redesigning the system from first principles. It should be possible to just do work, let it speak for itself, and get paid automatically.
GPT made one more observation toward the end of our conversation that I keep coming back to, and it is probably the thing that made me want to write this post. It said that if Fetch Quests succeeds, my resume suddenly reads as “visionary cross-disciplinary builder.” If it does not succeed, the same resume with the same work and the same trajectory reads as “unusual career path.” That really got to me, because it captures the whole problem perfectly. Outcomes determine how people read hiring signals retroactively. If you win they call you a visionary, and if you lose they call you unfocused, but what they are missing is that the work itself does not change at all between those two readings, and often luck is the difference. Learning outcomes are really the most important, after all. The results of the work should determine reputation, as opposed to external institutions deciding whether they want to validate your ability with a literal stamp of approval and a salary.
food 4 thought
The more I think about the problem-origin builder archetype, the more it resonates. These are people who see broken systems and start building before anyone asks them to, and the current hiring process has no idea what to do with them because their work does not fit into the boxes the process was designed to evaluate. If Fetch Quests can give those builders a place to prove themselves through work, then maybe the next person who argues with a bot about their job application will have somewhere better to go. It is also possible I have no idea what I am talking about and this is all a farce, but only time will tell. What do you think?
share your thoughts
Have feedback on this post? I'd love to hear from you.
As always, 'twas nice to write for you, dear reader. Until next time.