
I viewed academia as altruistic and relatively enlightened. And I've certainly met many who live up to that.

I've also occasionally heard of entire academic departments that should be in jail, for being pieces of crud.

Then there's what I'm guessing is the bulk of academia: care/cared about the field and their impact to some extent, try to do their jobs competently, look out for their students, maybe still try to find interest in the work, and operate within whatever hostile politics they're at the mercy of, without being cruddier than they absolutely have to be.

That's not as inspiring as it could be, but it's a lot better than the tech industry overall.



> That's not as inspiring as it could be, but it's a lot better than the tech industry overall.

I regularly see both worlds. What I find more troubling in academia is that it is difficult to openly talk about how flawed the system is, that people make mistakes, and that papers have mistakes (one's own as well as others'). We know all software has bugs; but the code that a PhD student hacks together overnight is assumed to be flawless (the more senior people rarely even glance at it), otherwise the paper is all wrong, and papers are set in stone... So I genuinely struggle with the lack of a proper failure culture in academia; it is designed as a system that is fundamentally geared against openly discussing failure.


I suspect there's many reasons for the field/department cultures.

One of them, which was surprising to me (and which I first heard from a friend in a hard STEM field), was what happens when student A's thesis result is found to be wrong due to a flawed experiment... but only after student B is well into their own dissertation building upon A's result. Reportedly, everyone involved (A, B, their PI, the department, the university) has an incentive to keep quiet about student A's bad result. B has an academic career to move forward, within funding and timeframes, and everyone else cares about reputation and money. And there is only downside for bystanders who complain, especially if they're other students, who are particularly vulnerable to retaliation/disfavor.

Another one I've seen, which is less surprising, is when there seems to be a culture of alliance or truce among faculty. So, if someone is misbehaving, or makes a mistake, it's understood that no one is going to call them out or interfere, and no one wants to even know about it more than they have to. In general, no selfish benefit can come from that, but a whole lot of negative feedback can. Mind your own business, glass houses, etc.


> care/cared about the field and their impact to some extent, try to do their jobs competently, look out for their students, maybe still try to find interest in the work, and operate within whatever hostile politics they're at the mercy of, without being cruddier than they absolutely have to be. That's not as inspiring as it could be, but it's a lot better than the tech industry overall.

That's pretty much my experience from 20+ years ago.

One thing that I didn't appreciate when I left the ivory tower was the extent of the replication "crisis."

If other academics can't replicate your work in some esoteric corner of bio research, it's no big deal--some people get burned wasting time, but the research just atrophies in the end.

But in the biotech / pharma industry, we in-licensed a lot of un-replicatable garbage from academia.

And replication was important to us because we actually had to make a drug that was effective (which loosely translates to ... "clinicians must be able to replicate your drug's efficacy").*

* I'm not sure how true this is anymore, given politicization of regulatory bodies, but it was an eye-opener to me years ago.


If you want to make a company based off a science discovery you have to start by replicating the initial discovery. Most biotech companies die there.


That's not really where most biotechs die.

You don't move a drug into clinical trials unless you can replicate the initial studies, which is almost always "drug x cures y in animal z."

They die when they try to determine if "drug x cures y in humans."


Series A companies die there. That's why I said most, not the largest.


"drug x cures y in humans - without causing z where bad_y - bad_z < ε".


> where bad_y - bad_z < ε

LOL. That is often the hard part. "We cured your toe fungus. Sorry about the heart attack."


Yeah, I would say that my time in academia disillusioned me somewhat, but not to the level that some people here are expressing. I never got the sense that people were falsifying data, directly (but covertly) backstabbing one another, or anything really awful like that.

But there are plenty of disheartening things that don't rise to that level of actual malfeasance. People get so comfortable in their tenured positions that they can lose touch with reality (e.g., the reality of how difficult their grad students' lives are). Even if they don't engage in actual research misconduct, there's a tendency for people to put their thumb on the scale in various ways (often, I think, without being aware of it), many of them connected to a sort of confirmation bias, in terms of who they think is a "good fit" for a job, what kind of work they want to support, etc.

In my experience they are at best dismissive and at worst offended by the idea that maybe the current financial/employment model of higher education isn't the best (e.g., that maybe you shouldn't have a two-tiered system of tenure-track and non-ladder faculty with wildly differing payscales, but rather should just have a larger number of people doing varying amounts of teaching and research for varying but roughly comparable levels of pay).

I felt like virtually everyone I met was in some sense committed to the truth, but often they were committed to their own view of the truth, which was usually a defensible and reasonable view but not the only view, and not as clearly distinct from other reasonable views as they felt it was. And they varied considerably in how much they felt it was acceptable or necessary to engage in minor shenanigans in order to keep moving forward (e.g., to what extent they'd compromise their actual beliefs in order to placate journal editors and get something published).

Also, there is often something endearing about how academics can be genuinely emotionally invested, sometimes to the point of rage or ecstasy, in matters so obscure that the average person wouldn't give them a second thought. It's sort of like finding someone who's a fan of some TV show that ran for 12 episodes in 1983 and is adorably gushy about it. Even the people I met who were quite cognizant of making strategic career moves and other such practical stuff still had a lot of this geeky obsession about them.

A lot of this may vary from one field to another. But on the whole there are many worse people in the world than academics.


As a US undergrad decades ago, at a major (non-elite) research school, I was already discovering these criticisms of the current academic system, in action, way back then. So I don't think we can blame much of any increase in fraud today on that system. It may be that only the perception of fraud is on the increase.

(I started to become alert to what that program was really about when I took one of the classes *critical* to my major. It involved a lot of heavy math, and was being taught by a TA with a *very poor* command of the English language. When I complained, my Princeton-grad advisor's reply was 'this course is to separate the men from the boys'. Yeah, thanks pal.

So far as I know, he published very few cited papers.)


How is it better than the tech industry?


Well, the amount of money being wasted is generally smaller, and often the results are not harming hundreds of millions of people around the world. (But it depends on the field.)



