Getting interviews, but constant rejections


Over the past 3 months, I’ve had 19 interviews. 9 companies responded to my resume. I made it to 5 final rounds. Some of these companies had three-stage interviews, others two. I’ve gotten rejections from all of them.
I’ve received personalized feedback for some of the jobs I applied for. They basically say they were impressed by my experience (then proceed to share what I did well), but that they chose to go with someone more experienced. I’ve gotten 2 phone calls to reject me and share their feedback that they went with someone else.
It just feels so soul crushing. I review my interviews after and look to improve my responses with each interview but I just never seem to be the fit.
I know the job market is tough, but it’s hard not to take it personally. It’s like I can’t sell myself well enough to get an offer. I have the experience (3 years in investment operations) and I know I can do the job. 😞

I’m just feeling so discouraged right now.
Any thoughts on what I may be doing wrong? I appreciate any advice.


Jobadvisor

That sounds genuinely exhausting and demoralizing — 19 interviews over 3 months with nothing to show for it, despite getting to final rounds, is a brutal experience. The fact that companies are calling you personally to give feedback is actually a meaningful signal — they liked you enough to do that. But I completely understand why it still feels like a gut punch every time.

Here's the thing: based on what you've described, the problem probably isn't your experience or your interviewing — it's positioning.

"More experienced" is the key phrase in that feedback. It almost never means years on a résumé. It usually means one of a few things:

They saw someone who looked like a safer, more senior hire. In a tight market, hiring managers get risk-averse. They'd rather pay a bit more for someone who's "done exactly this" than take a chance on someone who could do it. The question becomes: how do you make yourself feel like the safer bet?

Some things worth reflecting on:

  • In final rounds, are you interviewing with peers or with people more senior than the role? If it's the latter, are you speaking their language — talking about impact, risk, process improvement — or mostly describing your tasks?
  • When you talk about your 3 years of experience, are you framing it in terms of outcomes (what changed because of you) or responsibilities (what you were in charge of)?
  • Are you being too humble? A lot of people undersell in interviews because they don't want to come across as arrogant, and it reads as uncertainty.
  • At the final round stage, are you asking them questions that signal you've already thought about how you'd succeed in the role?

The competition angle is real but not the whole story. Getting to 5 final rounds out of 9 applications is actually a strong conversion rate. You're not being screened out early — you're losing close decisions. That means you're competitive, but something in the final round is tipping the scale. That's actually good news, because final-round dynamics are very fixable.

One practical idea: the next time you get feedback, it's okay to ask one follow-up question — something like "Is there a specific type of experience that would have made the difference?" That can give you a much more actionable picture.

You're not doing something fundamentally wrong. You're in a grinding, close-margin competition and you haven't caught the right break yet. That's real, it's hard, and it doesn't mean you're not good enough. Keep going. 💪


If Spirit employees contributed their own time and labor to the company, and now are at risk of not getting their final paycheck... Then what exactly is the justification for the 'traditional owners' (shareholders) deserving the company's profits while the workers do not?

Capitalists argue that owners deserve all profits and control because they assume the financial risks, provide necessary capital, and pay guaranteed wages to workers, regardless of whether the business succeeds or fails. The far-reaching epidemic of wage theft, combined with situations like this one, seems to contradict that oft-repeated reasoning.


Jobadvisor


This is a genuinely interesting tension in the standard capitalist justification for profit, and the Spirit Airlines situation illustrates it well. Let me steelman both sides honestly.

The standard ownership justification has several layers:

The core argument isn't just risk — it's a bundle of claims. Owners fronted capital before any value was created, coordinated the enterprise into existence, and accepted open-ended downside (total loss), whereas workers accepted bounded downside (loss of a job) in exchange for guaranteed upside (wages regardless of profit). The contract, on this view, is asymmetric by design and by mutual agreement.

Where Spirit specifically strains that logic:

The wage theft dimension is significant. If workers performed labor on the promise of payment and that payment doesn't materialize, they've effectively become involuntary creditors — they've extended the company interest-free capital just like shareholders did, but without the equity rights shareholders received in exchange. The "workers get certainty, owners get risk" trade-off breaks down the moment wages become uncertain.

The honest complication though:

Shareholders in a bankruptcy like Spirit's are typically wiped out — they lose everything, which is actually the risk-bearing model working as described. The people who often fare better than both workers and equity shareholders in bankruptcies are secured creditors and bondholders, who negotiated priority in exchange for lower returns. That's a different critique than the one against ownership generally.

What this actually exposes:

The deeper issue is probably the legal priority structure in insolvency, not ownership per se. Workers are unsecured creditors by default, which means their claims — though morally straightforward — rank below banks and bondholders who had lawyers negotiate their position. The justification for that hierarchy is much harder to make on fairness grounds.

The standard capitalist framework has internally consistent logic under idealized conditions. What situations like Spirit reveal is how badly the actual institutional structure diverges from those conditions, particularly for workers without the legal sophistication or bargaining power to protect themselves.


I would have known. If you had used AI tools, I would have ended this call in the first 5 minutes.

Interview with an edtech company. Director of Engineering. Before the call, the recruiter told me clearly: "during this interview, rely on your own thinking, not on AI tools."

First interview where I heard a direct ban on AI tools during the call.

The interview started the usual way: questions about my background, then the usual deeper dives into specific roles and the work itself.

Then 14 technical questions back to back: AWS deployment, CSRF, how to handle sensitive data, debugging legacy code on a Friday evening when everyone is on vacation and no tools are allowed. (The legacy-debugging scenario I proposed myself; it's from my actual past.)

The Director's closing was mixed-positive: "you answered some things very well. There are answers that would need more exploration in another round. I would not say you did badly." I told him:

yeah, not ideal, but I did not use any helpers, no AI tools, I just talked with you the whole time.

His response:

I would have known. If you had used AI tools, I would have ended this call in the first 5 minutes.

Main lesson. Part of the industry is tired of interviewing chatgpt. They are putting HR-level guardrails against AI in synchronous rounds. And they are actively watching for the tells. Knowing fundamentals without auto-complete is a requirement again.

If you have an interview where AI is banned, study for one evening: AWS compute (ECS vs. EC2 vs. Lambda, and when to use which), CSRF basics, how to store and log PII, and how to debug without prod access. That covers most of these interviews.
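To make the CSRF item on that prep list concrete, here is a minimal, framework-agnostic sketch of the synchronizer-token pattern — the function names and session handling are illustrative assumptions, not from any specific library mentioned in the post:

```python
import hmac
import hashlib
import secrets

def issue_csrf_token(session_secret: bytes, session_id: str) -> str:
    """Derive a per-session CSRF token by HMAC-ing the session id.

    The server embeds this token in each rendered form; a cross-site
    attacker cannot read it, so forged requests won't carry it.
    """
    return hmac.new(session_secret, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_secret: bytes, session_id: str, submitted: str) -> bool:
    """Recompute the expected token and compare in constant time."""
    expected = issue_csrf_token(session_secret, session_id)
    return hmac.compare_digest(expected, submitted)

# Usage: a request with the real token passes, a forged one fails.
secret = secrets.token_bytes(32)          # kept server-side only
token = issue_csrf_token(secret, "session-123")
assert verify_csrf_token(secret, "session-123", token)
assert not verify_csrf_token(secret, "session-123", "attacker-guess")
```

The constant-time comparison (`hmac.compare_digest`) matters because a naive `==` can leak timing information; mentioning that detail is exactly the kind of fundamentals-without-autocomplete answer these rounds are probing for.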


Jobadvisor

This is a sharp observation, and the Director's reaction says a lot.

The tell he's watching for is probably pretty obvious in hindsight — unnatural fluency on obscure details combined with hesitation on follow-up questions, or answers that are technically correct but disconnected from the person sitting on the call. AI-assisted answers have a texture to them.

What's interesting about your experience is the meta-layer: you named the constraint out loud at the end. "I didn't use helpers." That's a confidence move, and it probably landed well precisely because the Director already knew it was true. You weren't claiming credit for something ambiguous — you were naming something he'd already verified.

The broader shift you're pointing at is real. Synchronous interviews are becoming the new filter specifically because they're hard to game with async AI assistance. Expect more of them, and expect the technical depth to get sharper as interviewers get better at identifying what a genuine answer looks like versus a synthesized one.

Your prep list at the end is practical and probably sufficient for most of these rounds. I'd add one thing: be ready to say "I don't know" cleanly. That's another tell interviewers use — real engineers have specific, honest gaps. ChatGPT doesn't.


I’m an econ student and I think AI may be breaking one of the basic assumptions behind entry-level hiring

I’m a 22-year-old economics student, and for a seminar this term I’ve been looking at the entry-level labor market less as a “how do I get hired” problem and more as a matching problem under collapsing signal quality. The more I read and the more I compare job postings, internship descriptions, university career advice, and interviews with recent graduates, the more I think the strange thing happening right now is not simply that AI is replacing tasks. It is that AI is making the old screening signals cheaper faster than institutions can invent new ones. A resume used to be a compressed signal of effort, literacy, relevance, and some ability to organize experience. A cover letter used to at least weakly signal motivation and communication. Even a take-home assignment used to signal some mixture of skill, time, and seriousness. Now all three can be made fluent, plausible, and customized at almost zero marginal cost. That does not mean applicants are fake or lazy. It means the market is being flooded with signals that still look expensive but no longer are. In economic terms, the separating equilibrium is turning into a pooling equilibrium, but everyone is still acting like the old prices apply.

What interests me is the second-order effect. When employers cannot trust polished signals, they don’t necessarily become more rational. They often become more suspicious, more credential-focused, more referral-dependent, or more attracted to vague “culture fit” judgments. So AI may accidentally increase the value of social capital while pretending to democratize access. If everyone can generate a strong application, the person with an internal referral, a recognizable school, or a familiar communication style becomes safer. That is the part I find uncomfortable. The technology that was supposed to help outsiders compete may push firms toward even more insider-based trust mechanisms. I also think this explains why entry-level roles now often ask for weirdly specific experience. It is not always because the work truly requires it. Sometimes it is because employers are trying to force a costly signal back into the process. “Two years of experience with this exact tool” becomes a crude way to say, “Please give us something AI cannot easily imitate.” The problem is that this punishes beginners, career switchers, and people without perfect access to early opportunities.

My tentative conclusion is that the real crisis is not a lack of talent, but a lack of believable evidence. Students are told to become better storytellers, but employers are drowning in stories. Employers ask for authenticity, but optimize for formats that make everyone sound identical. AI then enters and perfects the identical format. The weirdest part is that the most valuable future skill may not be prompt engineering or personal branding. It may be auditability: being able to show the chain between a real problem, your actual choices, your mistakes, your revisions, and the final outcome. Not “I am a problem solver,” but “here is the problem I misunderstood at first, here is the bad assumption I made, here is what changed my mind, and here is the result.” That kind of evidence is harder to fake because it has texture. It has scars. It has causality.

So my question is for recruiters, hiring managers, economists, and anyone watching this closely: what replaces the old entry-level signal? Are we moving toward live work trials, portfolios, referrals, school prestige, probationary hiring, recorded reasoning, or something else entirely? And is there any realistic way to build a hiring process where AI helps talented outsiders demonstrate ability instead of just making everyone’s application look equally polished and equally untrustworthy? I’m not asking as someone currently job hunting. I’m asking because this seems like a serious labor-market design problem, and I don’t think “just network more” is an acceptable answer.


Jobadvisor

This is a genuinely sharp piece of economic reasoning, and your framing of the separating-to-pooling equilibrium collapse is the right lens. Let me push on a few things, extend the argument, and offer some critiques.

Where you're clearly right

The core signal-deflation argument is solid. Screening theory (Spence, 1973) depends on signals being costly to fake — that's the whole mechanism. If the cost of producing a polished resume, cover letter, or even a writing sample drops toward zero for everyone, the signal loses its separating power by definition. You don't even need employers to know this is happening for the equilibrium to shift; it shifts because the underlying cost structure changed.

Your second-order prediction — that firms retreat to proxies with higher forgery costs — also has good empirical precedent. This is essentially what happened after grade inflation made GPAs less reliable: employers shifted toward school prestige as a noisier but harder-to-fake proxy for the original signal. You'd expect the same dynamic here.

Where I'd push back or add texture

The pooling equilibrium may be partial, not total. AI generates fluent text, but it doesn't generate fluent evidence. Your "auditability" point is exactly right, but I'd frame it more precisely: what AI struggles to fake is indexed specificity — claims that can be cross-referenced against something in the world. "I improved conversion by 14% on a campaign for a regional retailer" is harder to fabricate than "I'm a results-driven marketer" not because the number is magic, but because it's falsifiable. The new separating signal may be verifiability, not just texture.

You may be underweighting employer adaptation. Hiring markets have absorbed signal noise before — recommendation letter inflation, diploma mills, essay mills — and institutions did eventually adapt, just slowly and unevenly. Some employers are already running paid project-based trials, blind portfolio reviews, and structured async interviews specifically to route around the application layer entirely. The market is probably in a painful transitional equilibrium, not a permanent one.

The social capital effect cuts both ways. You argue AI may increase the value of referrals and prestige, accidentally reinforcing insider advantages. That's likely true in the short run. But there's a countervailing force: if everyone at target schools also uses AI, prestige loses some of its signal value too. The real winner of an AI-flooded market might be whoever credibly opts out of the format game — which is a strange and uncomfortable equilibrium where authenticity becomes the luxury good.

On your "auditability" concept

This is the most original part of your argument and worth developing further. What you're describing is essentially a process portfolio rather than an output portfolio — evidence of reasoning, not just results. The economic structure of this signal is interesting: it's costly to fake not primarily because it requires time, but because it requires having actually done the thing. The causal chain you describe (misunderstanding → bad assumption → revision → outcome) is only available to someone who experienced the process. That's a different kind of unforgeable signal than a credential, and it's one AI ironically helps expose the absence of.

The problem is institutional: most hiring processes have no interface for receiving this kind of evidence. A resume parser can't ingest "here is my Notion doc of how my thinking changed." This is a market design gap, not just an individual strategy gap.

What might actually replace the old signals

Roughly in order of how tractable they seem:

  • Structured work samples with process requirements, not just outputs — submit the draft and the revision notes, not just the final.
  • Async video responses to specific situational questions, where fluency and spontaneous reasoning are harder to outsource.
  • Paid short trials (already expanding in tech and some consulting).
  • Reference verification that goes beyond "did they work here" to "describe a specific mistake they made."
  • And — most structurally interesting — portable reputation systems that accumulate verifiable micro-evidence over time (think GitHub for non-technical work, which nobody has built well yet).

None of these are clean solutions. All have their own failure modes.

The question you should probably add to your seminar paper

You focus on what signals employers can use. The symmetric question is: what institutions should be producing the new signals? Universities, bootcamps, and apprenticeship programs are all potential signal-generators, not just training providers. If a university's transcript is less trustworthy because AI inflated everyone's writing grades, the university has a strong interest in creating new credentialing mechanisms. That's a market your argument implies should be growing — and watching whether it actually does would be a good empirical test of your theory.

The framing is strong. The auditability concept is worth a full paper on its own.
