
Any discussion of maze algorithms is incomplete without a reference to the endless maze algorithm used in "Entombed", the 1982 Atari 2600 game.

Many great articles have been written about it, for example:

https://www.gamesthatwerent.com/2024/01/the-endless-maze-alg...

https://ieee-cog.org/2021/assets/papers/paper_215.pdf
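
For those who don't want to dig through the papers: the trick, as I understand it from the write-ups above, is that each new maze tile is decided by feeding a handful of already-generated neighboring tiles into a small lookup table whose entries say "wall", "empty", or "pick at random". A rough Python sketch of that shape; the exact neighborhood and especially the table are placeholders, not the original ROM data:

    import random

    WALL, EMPTY = 1, 0
    WIDTH, ROWS = 8, 16  # generate the left half; the game mirrors it to the right

    def table(a, b, c, d, e):
        # The real game indexes a hand-tuned 32-entry table with these five bits,
        # yielding wall, empty, or "choose at random". This placeholder just
        # guesses, so it will NOT reproduce Entombed's actual mazes.
        return random.choice((WALL, EMPTY))

    def next_row(prev):
        row = []
        for x in range(WIDTH):
            a = row[x - 2] if x >= 2 else WALL   # two already-placed tiles in this row
            b = row[x - 1] if x >= 1 else WALL
            c = prev[x - 1] if x >= 1 else WALL  # three tiles from the row above
            d = prev[x]
            e = prev[x + 1] if x + 1 < WIDTH else WALL
            row.append(table(a, b, c, d, e))
        return row

    maze = [[EMPTY] * WIDTH]
    for _ in range(ROWS):
        maze.append(next_row(maze[-1]))
    for r in maze:
        print("".join("#" if t else "." for t in r))

The famous mystery, as the articles describe, is the specific hand-tuned table itself.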


Ah, the early days of AI.

If a book or movie is ever made about the history of AI, the script would include this period of AI history and would probably go something like this…

(Some dramatic license here, sure. But not much more than your average "based on true events" script.)

In 1957, Frank Rosenblatt built a physical neural network machine called the Perceptron. It used variable resistors and reconfigurable wiring to simulate brain-like learning. Each resistor had a motor to adjust weights, allowing the system to "learn" from input data. Hook it up to a fridge-sized video camera (20x20 resolution), train it overnight, and it could recognize objects. Pretty wild for the time.

Rosenblatt was a showman—loud, charismatic, and convinced intelligent machines were just around the corner.

Marvin Minsky, a jealous academic peer of Frank, was in favor of a different approach to AI: Expert Systems. He published a book (Perceptrons, 1969) which all but killed research into neural nets. Marvin pointed out that no neural net with a depth of one layer could solve the "XOR" problem.

While the book's findings and mathematical proof were correct, they were based on incorrect assumptions (that the Perceptron only used one layer and that algorithms like backpropagation did not exist).
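
(For the technically inclined, here is a toy Python sketch of the single-layer learning rule at issue; nothing like the original resistor-and-motor hardware, just the math. It happily learns AND, but no choice of two weights and a bias can separate XOR, which is the limitation Minsky and Papert formalized.)

    # Toy single-layer perceptron: my sketch, not Rosenblatt's machine.
    def train(samples, epochs=20, lr=0.1):
        w1 = w2 = b = 0.0
        for _ in range(epochs):
            for x1, x2, target in samples:
                out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
                err = target - out
                w1 += lr * err * x1
                w2 += lr * err * x2
                b  += lr * err
        return w1, w2, b

    def predict(w1, w2, b, x1, x2):
        return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

    AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
    XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

    w = train(AND)
    print([predict(*w, x1, x2) for x1, x2, _ in AND])  # matches AND after training
    w = train(XOR)
    print([predict(*w, x1, x2) for x1, x2, _ in XOR])  # never matches XOR, however long you train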

As a result, a lot of academic AI funding was directed towards Expert Systems. The flagship of this was the MYCIN project. Essentially, it was a system to find the correct antibiotic based on the exact bacteria a patient was infected with. The system thus had knowledge about thousands and thousands of different diseases with their associated symptoms. At the time, many different antibiotics existed, and using the wrong one for a given disease could be fatal to the patient.

When the system was finally ready for use... after six years (!), the pharmaceutical industry had developed “broad-spectrum antibiotics,” which did not require any of the detailed analysis MYCIN was developed for.

The period of suppressing Neural Net research is now referred to as (one of) the winter(s) of AI.

--------

As said, that is the fictional treatment. In reality, the facts, motivations, and behavior of the characters are a lot more nuanced.


Not that wrong.

I went through Stanford CS when those guys were in charge. It was starting to become clear that the emperor had no clothes, but most of the CS faculty was unwilling to admit it. It was really discouraging. Peak hype was in "The fifth generation: artificial intelligence and Japan's computer challenge to the world" (1983), by Feigenbaum. (Japan at one point in the 1980s had an AI program which attempted to build hardware to run Prolog fast.)

Trying to use expert systems for medicine lent an appearance of importance to something that might work for auto repair manuals. It's mostly a mechanization of trouble-shooting charts. It's not totally useless, but you get out pretty much what you carefully put in.
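
To caricature the point in a few lines of Python (hypothetical rules, purely illustrative):

    # An "expert system" as a mechanized trouble-shooting chart.
    RULES = [
        ({"engine_cranks": False, "lights_dim": True}, "battery is flat"),
        ({"engine_cranks": True, "fuel_gauge_empty": True}, "out of fuel"),
        ({"engine_cranks": True, "spark_present": False}, "ignition fault"),
    ]

    def diagnose(observations):
        for conditions, conclusion in RULES:
            if all(observations.get(k) == v for k, v in conditions.items()):
                return conclusion
        return "no matching rule; ask a mechanic"

    print(diagnose({"engine_cranks": False, "lights_dim": True}))  # -> battery is flat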


Not exactly an expert system, but during my PhD I contributed to a natural language parsing/generation system for Dutch written mostly in Prolog with some C++ for performance reasons. The only statistical component was a maxent ranker for disambiguation and fluency ranking.

No statistical dependency parser came near it accuracy-wise until BERT/RoBERTa + biaffine parsing.
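
For context, the maxent ranker was nothing exotic; conceptually it is just a log-linear scorer over candidate parses. A toy sketch in Python with invented feature names and weights (none of these identifiers come from the actual system, and the real weights were of course learned, not hand-set):

    import math

    WEIGHTS = {"lexical_pref": 1.3, "dependency_len": -0.4, "rule_freq": 0.9}

    def score(features):
        # Dot product of feature values with the model weights.
        return sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())

    def rank(candidates):
        # candidates: list of (parse_id, feature_dict); returns normalized probabilities.
        scored = [(pid, score(f)) for pid, f in candidates]
        z = sum(math.exp(s) for _, s in scored)
        return sorted(((pid, math.exp(s) / z) for pid, s in scored),
                      key=lambda t: t[1], reverse=True)

    print(rank([("parse_a", {"lexical_pref": 1.0, "dependency_len": 3.0}),
                ("parse_b", {"lexical_pref": 0.2, "dependency_len": 1.0, "rule_freq": 2.0})]))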


Oh yeah, the good hand-crafted grammars are really good. For my PhD I worked in a group that was deep in the DELPH-IN/ERG collaboration, and they did some amazing things with that.

To be fair, the performance of rules or Bayesian networks or statistical models wasn't the problem (performance compared to existing practice). De Dombal showed in 1972 that a simple Bayes model was better than most ED physicians at triaging abdominal pain.
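
To give a sense of how simple such a model is, here is a toy naive Bayes sketch in Python. The diagnoses, symptoms, and numbers are all made up for illustration; they are not de Dombal's data.

    # Toy naive Bayes triage: priors and likelihoods are invented placeholders.
    PRIORS = {"appendicitis": 0.2, "non_specific_pain": 0.8}
    LIKELIHOODS = {  # P(symptom present | diagnosis)
        "appendicitis":      {"rlq_tenderness": 0.8, "nausea": 0.7},
        "non_specific_pain": {"rlq_tenderness": 0.2, "nausea": 0.4},
    }

    def posterior(symptoms):
        scores = {}
        for dx, prior in PRIORS.items():
            p = prior
            for s, present in symptoms.items():
                ps = LIKELIHOODS[dx][s]
                p *= ps if present else (1 - ps)
            scores[dx] = p
        total = sum(scores.values())
        return {dx: p / total for dx, p in scores.items()}

    print(posterior({"rlq_tenderness": True, "nausea": True}))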

The main barrier to scaling was workflow integration due to lack of electronic data, and if it was available, interoperability (as it is today). The other barriers were problems with maintenance and performance monitoring, which are still issues today in healthcare and other industries.

I do agree the 5th Generation project never made sense, but as you point out they had developed hardware to accelerate Prolog and wanted to show it off and overused the tech. Hmmm, sounds familiar...


Here are more expansive reflections on FGCS from Alan Kay and Markus Triska: https://www.quora.com/Why-did-Japan-s-Fifth-Generation-Compu...

The paper of Ueda they cite is so lovely to read, full of marvelous ideas:

Ueda K. Logic/Constraint Programming and Concurrency: The hard-won lessons of the Fifth Generation Computer project. Science of Computer Programming. 2018;164:3-17. doi:10.1016/j.scico.2017.06.002 open access: https://linkinghub.elsevier.com/retrieve/pii/S01676423173012...


The early history of AI/cybernetics seems poorly documented. There are a few books, some articles, and some oral histories about what was going on with McCulloch and Pitts. It makes one wonder what might have been with a lot of things, including if Pitts had lived longer, been able to get out of the rut he found himself in at the end (to put it mildly), and hadn't burned his PhD dissertation. But perhaps one of the more interesting comments directly relevant to all this lies in this fragment from a “New Scientist” article[1]:

> Worse, it seems other researchers deliberately stayed away. John McCarthy, who coined the term “artificial intelligence”, told Piccinini that when he and fellow AI founder Marvin Minsky got started, they chose to do their own thing rather than follow McCulloch because they didn’t want to be subsumed into his orbit.

[1] https://www.newscientist.com/article/mg23831800-300-how-a-fr...


>> The early history of AI/cybernetics seems poorly documented.

I guess it depends on what you mean by "documented". If you're talking about a historical retrospective, written after the fact by a documentarian / historian, then you're probably correct.

But in terms of primary sources, I'd say it's fairly well documented. A lot of the original documents related to the earlier days of AI are readily available[1]. And there are at least a few books from years ago that provide a sort of overview of the field at that moment in time. In aggregate, they provide at least a moderate coverage of the history of the field.

Consider also that the term "History of Artificial Intelligence" has its own Wikipedia page[2], which strikes me as reasonably comprehensive.

[1]: Here I refer to things like MIT CSAIL "AI Memo series"[3] and related[4][5], the Proceedings of the International Joint Conference on AI[6], the CMU AI Repository[7], etc.

[2]: https://en.wikipedia.org/wiki/History_of_artificial_intellig...

[3]: https://dspace.mit.edu/handle/1721.1/5460/browse?type=dateis...

[4]: https://dspace.mit.edu/handle/1721.1/39813

[5]: https://dspace.mit.edu/handle/1721.1/5461

[6]: https://www.ijcai.org/all_proceedings

[7]: https://www.cs.cmu.edu/Groups/AI/html/rep_info/intro.html


>> If a book or movie is ever made about the history of AI, the script would include this period of AI history and would probably go something like this…

I would love to see a "Halt and Catch Fire" style treatment of this era.

>> Marvin Minsky, a jealous academic peer of Frank, was in favor of a different approach to AI: Expert Systems. He published a book (Perceptrons, 1969) which all but killed research into neural nets. Marvin pointed out that no neural net with a depth of one layer could solve the "XOR" problem.

I think a lot of people have an impression - an impression that I shared until recently - that the Perceptrons book was a "hit piece" aimed at intentionally destroying interest in the perceptron approach. But having just finished reading the Parallel Distributed Processing book and being in the middle of reading Perceptrons right now, I no longer fully buy that. The effect may well have been what is widely described, but Minsky and Papert don't really seem to be as "anti-perceptron" as the "received wisdom" suggests.


Don’t attribute to jealousy what can be adequately explained by vanishing gradients.

BTW, the ad hoc treatment of uncertainty in MYCIN (certainty factors) motivated the work on Bayesian networks.
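
For anyone who hasn't seen them, the certainty-factor combination rules were strikingly ad hoc. A sketch of the textbook formulation (my recollection of the standard version, not MYCIN's actual code):

    # MYCIN-style certainty factor combination, as usually presented in textbooks.
    def combine_cf(cf1, cf2):
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)          # two supporting rules reinforce
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)          # two disconfirming rules reinforce
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # mixed evidence partially cancels

    print(combine_cf(0.6, 0.4))   # 0.76
    print(combine_cf(0.6, -0.4))  # ~0.33

Contrast that with a proper probabilistic treatment, where evidence combination falls out of Bayes' rule instead of being postulated.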


"AI" is too much of a broad umbrella term of competing ideas, from symbolic logic (FOL, expert systems) to statistical operations (NNs). It's clear today that the latter has won the race, but ignoring this history doesn't seem to be a very smart move.

I'm in no way an expert, but I feel that today's LLMs lack some concepts well known from research on logical reasoning. Something like: semantics.


AI is a broad field because intelligence is a broad field.

And what's remarkable about LLMs is exactly that: they don't reason like machines. They don't use the kind of hard machine logic you see in an if-else chain. They reason using the same type of associative abstract thinking as humans do.


Surely "intelligence" is a broad field... i might not be so that great at it, but i hope that's ok.

"[LLMs] reason using the same type of associative abstract thinking as humans do": do you have a reference for this bold statement?

I entered "associative abstract thinking llm" in a good old search engine. The results point to papers rather hinting that they're not so good at it (yet?), for example: https://articles.emp0.com/abstract-reasoning-in-llms/.


I don't have a single reference that says outright "LLMs are doing the same kind of abstract thinking as humans do". Rather, this is something that's scattered across a thousand articles and evaluations - in which LLMs prove over and over again that they excel at the cognitive skills that were once exclusive to humans - or fail at them in ways that are amusingly humanlike.

But the closest thing is probably Anthropic's famous interpretability papers:

https://transformer-circuits.pub/2024/scaling-monosemanticit...

https://transformer-circuits.pub/2025/attribution-graphs/bio...

In which Anthropic finds circuits in an LLM that correspond to high-level abstractions the LLM can recognize and use, and traces the ways they can be connected, which forms the foundation of associative abstract thinking.


The Atari 800 came a bit later (1979); the three machines discussed were all introduced in 1977. 1979 was also when the TI-99/4 came out, which eventually became popular when the price was dropped below cost.


>> Patients tend to find it jarring if the lens is perfectly clear.

Yeah, funny. Light it up! For... reasons... I only had one eye done for a while and, boy, did the world look different through the old and the new lens. Christmas trees, for example, looked like they were lit with either lemon-colored lights or bright ones, depending on the eye. I decided to do the other eye too, and the world looks bright again... from both eyes.


Silly remark, but talking about "svelte": I can't parse the picture with Mike Markkula. The Apple II in the front is not exactly a small computer: it has a full-size keyboard with several inches of space between the keyboard and the edges. So either I don't understand the basics of perspective, or this is a scaled-down model, no?


Odd perspective, I guess? Here's a different angle: https://techland.time.com/2012/04/16/photos-the-apple-ii-tur...


Great article. It’s wild to look back at 1977/1978 and realize how suddenly the personal computer era exploded into the mainstream. The PET, TRS-80, and Apple II all hit the market within months of each other, and while hobbyists had already been tinkering with machines like the Altair, IMSAI, KIM-1, and Apple I, this was the moment computers truly became “consumer” products.

From a technical perspective, the timing made sense—there was a foundation of microprocessor-based systems and a growing community of enthusiasts. But for the general public, it felt like computers went from obscure to omnipresent overnight. They were suddenly on TV, in magazines, featured in books, and even depicted in movies and shows. That cultural shift was massive. For many of us, it marked the beginning of having computers in our homes—something that’s never changed since.

I appreciated the article’s attention to detail too. The bit about the TRS-80 monitor being repurposed from an existing product (with a "Mercedes Silver" color to boot), and the PET’s sheet metal casing being a practical choice rather than a design one—those are the kinds of behind-the-scenes decisions that rarely get spotlighted but say a lot about how fast things were moving back then.


>> One of my fondest memories was buying the book "The Elements of Computing Systems"

>> by Nisan and Schocken, and implementing a 4-bit CPU in Minecraft.

You confused me there: the book doesn't cover Minecraft; you did that yourself after reading the book. Got it.

The book is absolutely fantastic, it is the basis for the "From Nand to Tetris" courses: https://www.nand2tetris.org/

I haven't digested it in full, and with a title like that and the plain cover I always have to scramble to find it when I have a few minutes ("What is that Nand to Tetris book called again?").
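
For anyone wondering what the premise is: the early chapters have you build every gate out of NAND alone, then adders, an ALU, memory, and eventually a CPU and toolchain. A toy illustration of the first step in Python (mine; the book itself uses a simple HDL):

    # Building the basic gates from NAND only, in the spirit of the book's first chapters.
    def nand(a, b): return 0 if (a and b) else 1

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", and_(a, b), or_(a, b), xor_(a, b))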


Yes, kind of...

/path/to/firefox --window-size 1700 --headless -screenshot myfile.png file://myfile.html

Easy, right?

Used this for many years... but beware:

- caveat 1: this is (or was) a more or less undocumented feature; a few years ago it just disappeared, only to come back in a later release.

- caveat 2: even though you can convert local files, it does require internet access, as any references to icons, style sheets, fonts, and tracker pixels cause Firefox to attempt to retrieve them without any (sensible) timeout. So, running this on a server without internet access will make the process hang forever.
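
One way to defend against that hang is to wrap the call in a hard timeout. A sketch in Python; the flags mirror the command above, and the paths and the 60-second budget are placeholders:

    import subprocess

    # Kill the headless-screenshot call if it doesn't finish in time,
    # so a missing network (caveat 2) can't hang the process forever.
    cmd = [
        "/path/to/firefox",
        "--headless",
        "--window-size", "1700",
        "-screenshot", "/tmp/myfile.png",
        "file:///tmp/myfile.html",
    ]
    try:
        subprocess.run(cmd, timeout=60)
    except subprocess.TimeoutExpired:
        print("firefox did not finish in time; is it waiting on remote assets?")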


It.. depends.

Historically, NFS has had many flaws on different operating systems. Many of these issues appear to have been resolved over time, and I have not seen it referred to as "Nightmare File System" for decades.

However, depending on many factors, NFS may still be a bad choice. In our setup, for example, using a large SQLite database over NFS turns out to be up to 10 times as slow as using a "real" disk.

The SQLite FAQs warn about bigger problems than slowness: https://www.sqlite.org/faq.html#q5


So there's nothing wrong with NFS: people just remember old, buggy implementations. Do you think TernFS somehow suffers from those old bugs?


It sounds like you're saying it used to be bad (fair enough) and there are use cases where it's bad (also fair enough). But I feel like that describes most software as it goes through growing pains and people figure out where it's useful.


That is an excellent recommendation. For operating systems, anything Andy Tanenbaum did is world class.

This made me look up what he has been up to: there is a 2023 edition of "Modern Operating Systems" that covers cloud virtualization and Android along with everything that came before. Hm, tempting.


We really need a Tech Writer Hall of Fame. W. Richard Stevens, Andrew Tanenbaum, P.J. Plauger. Others?


Kernighan!


And how can we leave out the OG of tech writers: Donald Knuth. He got a bit distracted by developing TeX, but he received a well-deserved Turing Award for the series.

