Ask HN: What was being a software developer like about 30 years ago?
297 points by kovac on Oct 31, 2022 | 467 comments
I'm curious what it was like to be a developer 30 years ago compared to now in terms of processes, design principles, work-life balance, and compensation. Are things better now than they were back then?



It was great. Full stop.

A sense of mastery and adventure permeated everything I did. Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything. :-)

Starting in 1986 I worked on bespoke firmware (burned into EPROMs) that ran on bespoke embedded hardware.

Some systems were written entirely in assembly language (8085, 6805) and other systems were written mostly in C (68HC11, 68000). Self taught and written entirely by one person (me).

In retrospect, perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

Bugs in production were exceedingly rare. The relative simplicity of the systems was a huge factor, to be sure, but knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

Schedules were no less stringent than today; there was constant pressure to finish a product that would make or break the company's revenue for the next quarter, or so the company president/CEO repeatedly told me. :-) Nonetheless, this dinosaur would gladly trade today's "modern" development practices for those good ol' days(tm).


> In retrospect, perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

This was it. Even into the 90s you could reasonably "fully understand" what the machine was doing, even with something like Windows 95 and the early internet. That started to fall apart around that time, and now there are so many abstraction layers you have to choose what you specialize in.

And the fact that you couldn't just shit another software update into the update server to be slurped up by all your customers meant you had to actually test things - and you could easily explain to the bosses why testing had to be done, and done right, because the failure would cost millions in new disks being shipped around, etc. Now it's entirely expected to ship software that has significant known or unknown bugs because auto-update will fix it later.


It isn't right to consider that time as a golden age of software reliability. Software wasn't less buggy back then. My clear recollection is that it was all unbelievably buggy by today's standards. However things we take for granted now like crash reporting, emailed bug reports, etc just didn't exist, so a lot of devs just never found out they'd written buggy code and couldn't do anything even if they did. Maybe it felt like the results were reliable but really you were often just in the dark about whether people were experiencing bugs at all. This is the origin of war stories like how Windows 95 would detect and effectively hot-patch SimCity to work around memory corruption bugs in it that didn't show up in Windows 3.1.

Manual testing was no replacement for automated testing even if you had huge QA teams. They could do a good job of finding new bugs and usability issues compared to the devs-only unit testing mentality we tend to have today, but they were often quite poor at preventing regressions because repeating the same things over and over was very boring, and by the time they found the issue you may have been running out of time anyway.

I did some Windows 95 programming and Win3.1 too. Maybe you could fully understand what it was doing if you worked at Microsoft. For the rest of us, these were massive black boxes with essentially zero debugging support. If anything went wrong you got either a crash, or an HRESULT error code which might be in the headers if you're lucky, but luxuries like log files, exceptions, sanity checkers, static analysis tools, useful diagnostic messages etc were just totally absent. Windows programming was (and largely still is) essentially an exercise in constantly guessing why the code you just wrote wasn't working or was just drawing the wrong thing with no visibility into the source code. HTML can be frustratingly similar in some ways - if you do something wrong you just silently get the wrong results a lot of the time. But compared to something more modern like JavaFX/Jetpack Compose it was the dark ages.


I'm reminded of the Windows 95 uptime bug https://news.ycombinator.com/item?id=28340101 that nobody found for years because you simply couldn't keep a Windows system up that long. Something would just crash on you and bluescreen the whole thing, or you needed to touch a mandatory-reboot setting or install some software.


running FF on a windows 11 flagship HPE OMEN gaming laptop right now and this bitch crashes at LEAST once a day.


I get forced restarts on Windows 10 due to .NET updates only. These tend to ensure that applications built on the previous CLR cannot run until the shutdown process finishes rebuilding everything, and it's not done online.


Reliable and buggy can go together - after all, nobody left their computer running for days on end back then, so you almost always had a "fresh slate" when starting up. And since programs would crash, you were more trained to save things.

The other major aspect was that pre-internet, security was simply not an issue at all; since each machine was local and self-contained, there weren't "elite hax0rs" breaking into your box; the worst you had to worry about was floppy-copied viruses.


> a lot of devs just never found out they'd written buggy code and couldn't do anything even if they did.

This is undoubtedly true. No doubt there are countless quietly-malfunctioning embedded systems all around the world.

There also exist highly visible embedded systems such as on-air telephone systems used by high-profile talents in major radio markets around the country. In that environment malfunctions rarely go unnoticed. We'd hear about them literally the day of discovery. It's not that there were zero bugs back then, just nothing remotely like the jira-backlog-filling quantities of bugs that seem to be the norm today.


This was what passed for an "AAA" game in 1980

https://en.wikipedia.org/wiki/Ultima_I:_The_First_Age_of_Dar...

it was coded up in about a year by two people who threw in just about every idea they got.


1980 wasn't about 30 years ago, though. 30 years ago is 1992 which is Wolfenstein 3D, Civilization, and Final Fantasy type games. It's on the cusp of games like Warcraft, C&C, Ultima Online, Quake, Diablo, and Everquest. Games that are, more or less, like what we have now but with much much worse graphics.


In 1992 (and pretty much for a good part of the 90s) it was still possible and practical to build a small dev team (under 10) and push out incredible titles. IIRC, both id Software and the team that worked on Diablo were relatively small-ish.

Nowadays we have to look to indie studios.


You're not going to believe this, but I'm still working on Ultima IV. I come back to it every 3-4 years and spin up a new player and start over. I love it, but never can seem to commit enough time to build up all my virtues.


It was also very unforgiving - one mistake and you were back to rebuilding that virtue.


I recall that on one of my play-throughs as a kid, I got everything done except my characters needed to be level 8. So I un-virtuously tweaked the save file. (I think that was Ultima IV, but it's been a while.)

I also tweaked a later Ultima to not need the floppy disk in the drive. The budget copy protection had added itself to the executable and stored the real start address in a bad sector on disk, so I just patched the real start address back into the EXE header.


I read somewhere that Ultima V is the last Ultima where Lord British did the majority of the programming work. For Ultima VI he was convinced that he needed a full team to get it done.

I still think it should be rather doable (and should be done by any aspiring game programmer) for a one-man team to complete an Ultima V spin-off (same graphics, same complexity, but on modern platforms) nowadays. Modern computers, languages and game engines abstract away a lot of the difficulties.


Completely agreed. The tile based graphics that pushed the limits of a mid-80's computer (and in some cases required special hardware) can now be done off the cuff with totally naive code in a matter of a couple hours:

https://github.com/mschaef/waka-waka-land

There's lots more room these days for developing the story, etc. if that's the level of production values you wish to achieve.
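
To make the "totally naive code" claim concrete, here's a minimal sketch in C (not taken from the linked repo; the tile art, map layout and palette are all invented for illustration): it just copies fixed-size tiles into an in-memory framebuffer and dumps the result as a PPM file you can open in any image viewer.

  /* Naive tile-map renderer: copy 8x8 tiles into a framebuffer, write a PPM.
     Everything here (tile art, map layout, palette) is made up for illustration. */
  #include <stdio.h>
  #include <stdint.h>

  #define TILE   8          /* tile width/height in pixels */
  #define MAP_W  16         /* map size in tiles */
  #define MAP_H  12
  #define FB_W   (MAP_W * TILE)
  #define FB_H   (MAP_H * TILE)

  static uint8_t fb[FB_H][FB_W];           /* one palette index per pixel */

  /* Two hypothetical 1-bit 8x8 tiles (each byte is one row of pixels). */
  static const uint8_t tiles[2][TILE] = {
      { 0x00, 0x22, 0x00, 0x88, 0x00, 0x22, 0x00, 0x88 },   /* "grass" */
      { 0xFF, 0x81, 0xBD, 0xA5, 0xA5, 0xBD, 0x81, 0xFF },   /* "water" */
  };

  static void draw_tile(int tx, int ty, int id) {
      for (int y = 0; y < TILE; y++)
          for (int x = 0; x < TILE; x++)
              fb[ty * TILE + y][tx * TILE + x] =
                  ((tiles[id][y] >> (7 - x)) & 1) ? (uint8_t)(id + 1) : 0;
  }

  int main(void) {
      /* Checkerboard map, just to have something to draw. */
      for (int ty = 0; ty < MAP_H; ty++)
          for (int tx = 0; tx < MAP_W; tx++)
              draw_tile(tx, ty, (tx + ty) & 1);

      /* Dump as a binary PPM so the result can be viewed anywhere. */
      static const uint8_t pal[3][3] = { {32,32,32}, {40,160,40}, {40,80,200} };
      FILE *f = fopen("map.ppm", "wb");
      if (!f) return 1;
      fprintf(f, "P6\n%d %d\n255\n", FB_W, FB_H);
      for (int y = 0; y < FB_H; y++)
          for (int x = 0; x < FB_W; x++)
              fwrite(pal[fb[y][x]], 1, 3, f);
      fclose(f);
      return 0;
  }

That's the whole "engine"; the real effort nowadays goes into the tile art and the story rather than the blitting.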


I actually got turned down by Chuckles for my first 'real' programming job ... They wanted someone that actually played the games :-) It was a neat experience interviewing though - I was really surprised they had offices in New Hampshire.


This guy? Chuck "Chuckles" Bueche?

They interview him on the "Apple Time Warp" series. He wrote "Caverns of Callisto" at Origin. The optimizations they came up with to make things work were crazy. (I think they were drawing every third line to speed things up and had self-modifying code.)

https://appletimewarp.libsyn.com/

or youtube

https://www.youtube.com/channel/UC0o94loqgK3CMz7VEkDiIgA/vid...

A good resource on Apple programming 40-ish years ago.


I grew up in Manchester, it was years after I left that I realized Origin games had a development office right by the airport.


And now we have AAA games that take 6+ years to make and still ship unfinished and broken. What a weird time we live in.


Ultima I was a phenomenal game, but also incredibly hard by today's standards


Throwing up another Ultima memory...

I've got great memories of Ultima II on C64 and some Apple II as well. It was far more expansive than I, but still relatively fast. I remember when III came out, it was just... comparatively slow as molasses. It was more involved, but to the point where it became a multi day event, and it was too easy to lose interest waiting for things to load. II was a great combination of size/breadth/complexity and speed.

Then I got Bard's Tale... :)


Speaking of Ultima: www.uooutlands.com/

Been playing this recently. It is Ultima Online, but with all the right choices rather than all the wrong choices the development studio made post-1999. Anyone who enjoys online RPGs should certainly give it a try. The graphics don't do the game justice at all, but the quality of the game makes up for this and then some.


This is why I drifted towards game development for most of my career. Consoles, until the penultimate (antepenultimate?) generation, ran software bare or nearly bare on the host machine.

I also spent time in integrated display controller development and such; it was all very similar.

Nowadays it feels like everything rides on top of some ugly and opaque stack.


For context on what this guy is saying: the modern Xboxes (after the 360) are actually running VMs for each game. This is part of why, despite the hardware being technically (marginally) superior, Xbox tends to have lower graphical fidelity.


The 360 has a Type 1 (bare metal) hypervisor. So there's not much, if any, performance impact to having it since the software runs natively on the hardware.

Microsoft used a hypervisor primarily for security. They wanted to ensure that only signed code could be executed and wanted to prevent an exploit in a game from allowing the execution of unsigned code with kernel privileges on the system itself.

Every ounce of performance lost to the hypervisor is money Microsoft wasted in hardware costs. So they had an incentive to make the HyperV as performant as possible.


the 360 has no hypervisor.

The CPU had an emulator that you could run on x86 Windows, but it was not itself a hypervisor.

The hypervisor in the XB1 served a more important purpose: to provide developers a way of shipping the custom SDK to clients, and not forcing them to update it. This was quite important for software stability and in fact we made a few patches to MS's XB1 SDK (Durango) to optimise it for our games.

VMs are VMs; there are performance trade-offs.

I know this because I worked on AAA games before in this area. Do you also work in games, or are you repeating something you think you heard?


I don't work in the industry. But just because you "worked on AAA games" before doesn't make you correct.

This detailed architectural overview of the 360 discusses the hypervisor:

https://www.copetti.org/writings/consoles/xbox-360/

This YouTuber, who is an industry vet and has done several Xbox ports, claims the XB360 has a hypervisor:

https://www.youtube.com/watch?v=Vq1lxeg_gNs

And there are entries in the CVE database for the XB360 which describe the ability to run code in "hypervisor mode":

https://www.cvedetails.com/cve/CVE-2007-1221/

This detailed article on the above exploit goes into detail on how the memory model works on the XB360, including how main memory addressing works differently in hypervisor mode than in real mode:

https://www.360-hq.com/article1435.html

That's a whole lot of really smart people discussing a topic that you claim doesn't exist.


Appreciate the detailed reply!

> This detailed architectural overview of the 360 discusses the hypervisor:

> https://www.copetti.org/writings/consoles/xbox-360/

Yes, the 128KB of key storage and W^X. That's not a hypervisor in the sense that the XB1/Hyper-V or VMware have a hypervisor; they shouldn't even share a name, it's not the same thing at all.

It's like saying the JVM is a virtual machine in the same way QEMU is.

The 360 "Hypervisor" is more akin to a software T2 chip than anything that actually virtualises.


I don’t think you are showing respect when you simplistically repeat your assertion without effort, after two people expended their precious time to tell you in detail, with examples, that you are wrong. I don’t know anything, but a few minutes following the provided links and I find https://cxsecurity.com/issue/WLB-2007030065 which says:

  The Xbox 360 security system is designed around a hypervisor concept. All games and other applications, which must be cryptographically signed with Microsoft's private key, run in non-privileged mode, while only a small hypervisor runs in privileged ("hypervisor") mode. The hypervisor controls access to memory and provides encryption and decryption services.

  The policy implemented in the hypervisor forces all executable code to be read-only and encrypted. Therefore, unprivileged code cannot change executable code. A physical memory attack could modify code; however, code memory is encrypted with a unique per-session key, making meaningful modification of code memory in a broadly distributable fashion difficult. In addition, the stack and heap are always marked as non-executable, and therefore data loaded there can never be jumped to by unpriviledged code.

  Unprivileged code interacts with the hypervisor via the "sc" ("syscall") instruction, which causes the machine to enter hypervisor mode.
You can argue your own definition of what a hypervisor is, but I suspect you won’t get any respect for doing so.


The 360 indeed uses a hypervisor [0], but uses it only for security, to make the app signature verification run at a higher privilege level.

Windows on PCs also runs under hypervisor if you enable some security features (e.g. VBS/HVCI which are on by default since Windows 11 2022 update, or Windows Sandbox, or WDAG) or enable Hyper-V itself (e.g. to use WSL2/Docker).

The performance losses are indeed there, but by purely running the hypervisor you lose just around 1% [1], because the only overhead is added latency due to accessing memory through SLAT and accessing devices through IOMMU...

I'd imagine the XB1 is running with all the security stuff enabled though, which demands additional performance losses [2].

[0]: https://www.engadget.com/2005-11-29-the-hypervisor-and-its-i...

[1]: https://linustechtips.com/topic/1022616-the-real-world-impac...

[2]: https://www.tomshardware.com/news/windows-11-gaming-benchmar...


There’s no reason a pass-through GPU configuration in a VM would have lower graphical fidelity.


There is a reason, but it would only harm particularly poorly written games, and even then by a single-digit percentage.

To exercise that you need a lot of separate memory transfers. Tiny ones. Games tend to run bulky transfers of many megabytes instead.

Memory bandwidth and command pipe should not see an effect even with the minimally increased latency, on any HVM.


Again with the caveat that this is specific to dom0 virtualization taking advantage of full hardware acceleration VT-d/VT-x, etc, what you say isn’t even necessarily the case.

With modern virtualization tech, the hypervisor mainly sets things up then steps out of the way. It doesn’t have to involve itself at all in the servicing of memory requests (or mapped memory requests) because the CPU does the mapping and knows what accesses are allowed or aren’t. The overhead you’re talking about is basically traversing one extra level in a page table, noticeable only in microbenchmarks when filling the TLB or similar - this is a change in performance you might encounter a micro-regression in (without any virtualization to speak of) even when going from one generation of CPU architecture to the next.

Theoretically, the only time you’ll have any overhead is on faults (and even then, not all of them).

Of course I guess you could design a game to fault on every memory request or whatever, but that would be a very intentionally contrived scenario (vs just plain “bad” code).


Hello ComputerGuru,

As you may understand: there's more to graphical fidelity than just the GPU itself.

CPU<->GPU bandwidth (and GPU memory bandwidth) are also important.

There is a small but not insignificant overhead to these things with virtualisation: VMs don't come for free.


"Pass through GPU configuration" means that GPU memory is mapped directly into guest address space in hardware.

Bandwidth from a VM partition should be identical to that from the root partition.


I don’t understand what you’re trying to imply here.

Are you seriously suggesting that I chose to downgrade the graphics on the XB1 because I felt like it, and that dozens of other AAA game studios did the same thing?

Our engine was Microsoft native, by all rights it should have performed much better than PS4.

If you’re going to argue you’ll have to do a lot better than that since I have many years of lived experience with these platforms.


OK, you have a technical disagreement. No need to take it personally.

You may be right - you probably have more experience with this particular thing than I do.

I can't answer for the performance of the XB1, but I am curious what % reduction in GPU memory bandwidth you observed due to virtualization.

Did you have a non-virtualized environment on the same hardware to use for comparison?


I didn't take it personally, I just think you're presenting ignorance as fact and it's frustrating.

Especially when seemingly it comes from nowhere and people keep echoing the same thing which I know not to be true.

Look, I know people really love virtualisation (I love it too) but it comes with trade-offs; spreading misinformation only serves to misinform people for.. what, exactly?

I understood the parent's perspective. GPU passthrough (i.e. VT-d & AMD-Vi) does pass PCI-e lanes from the CPU to the VM at essentially the same performance. My comment was directly stating that graphical fidelity does not solely depend on the GPU; there are other components at play, such as textures being sent to the GPU driver. Those textures don't just appear out of thin air: they're read from disk by the CPU and passed to the GPU. (There's more to it, but usually I/O involves the CPU on older generations.)

The problem with VMs is that normal memory accesses take on average a 5% hit; I/O takes the heaviest hit, at about 15% for disk access and about 8% for network throughput (ballpark numbers, but in line with publicly available information).

It doesn't even matter what the exact precise numbers are, it should be telling to some degree that PS4 was native and XB1 was virtualised, and the XB1 performed worse with a more optimised and gamedev friendly API (Durango speaks DX11) and with better hardware.

It couldn't be more clear from the outside that the hypervisor was eating some of the performance.


I guess I should clarify that my point was purely in abstract and not specific to the XBox situation.

Of course in reality it depends on the hypervisor and the deployed configuration. Running a database under an ESXi VM with SSDs connected to a passed-through PCIe controller (under x86_64 with hardware-assisted CPU and IO virtualization enabled and correctly activated, interrupts working correctly, etc) gives me performance numbers within the statistical error margin when compared to the same configuration without ESXi in the picture.

I haven’t quantified the GPU performance similarly but others have and the performance hit (again, under different hypervisors) is definitely not what you make it out to be.

My point was that if there’s a specific performance hit, it would be pedantically incorrect to say “virtualizing the GPU is the problem” as compared to saying “the way MS virtualized GPU access caused a noticeable drop in achievable graphics.”


Sorry, I don't think I implied virtualising the GPU is the problem.

I said "the fact that it's a VM has caused performance degradation enough that graphical fidelity was diminished" - this is an important distinction.

To clarify further: the GPU and CPU are a unified package and the request pipeline is also shared; working overtime to send things to RAM will affect GPU bandwidth, so the overhead of non-GPU memory allocations will still affect the GPU due to that limited bandwidth being used.

I never checked if the GPU bandwidth was constrained by the hypervisor, to be fair, because such a thing was not possible to test; the only point of comparison is the PS4, which we didn't optimise as much as we did for DX and which ran on slightly less performant hardware.


I always figured texture loading from disk was mostly done speculatively and during the loading screen, but what do I know.

Anyway, a 5% memory bandwidth hit does not sound to me like a huge deal.


Lower graphical fidelity than what? PlayStation?


> Marginally superior to what? PlayStation?

Precisely

Both the original Xbox One and the Xbox One S have a custom, 1.75GHz AMD 8-core CPU, while the Xbox One X bumps that up to a 2.3GHz 8-core chip. The base PS4 CPU remained clocked at 1.6GHz and contains a similar custom AMD 8-core CPU with x86-based architecture, while the PS4 Pro bumps that clock speed up to 2.13GHz.

EDIT: you’ve edited your comment, but also yes.


The CPU isn't particularly relevant, is it (although the CPUs in the PS4/XBone generation were exceptionally terrible compared to what was standard on PCs at the time)? Graphical fidelity is going to depend much more on the GPU (although the CPU is going to bottleneck framerate if it's not powerful enough).

In the current generation the Series X has a more powerful GPU than the PS5, which tends to mean a lot of games run at higher resolutions on the system, although there's some games that run slightly better on PS5 (I think the Call of Duty games might be in that category?). And a lot (most?) are basically the same across both systems - probably because devs aren't bothering to tweak cross platform games separately for the two different consoles.


>This was it, even into the 90s you could reasonable "fully understand" what the machine was doing, even with something like Windows 95 and the early internet. That started to fall apart around that time and now there are so many abstraction layers you have to choose what you specialize in.

This doesn't really track. 30 years ago computers were, more or less, the same as they are now. The only major addition has been graphics cards. Other than that we've swapped some peripherals. Don't really see how someone could "fully understand" the modem, video drivers, USB controllers, motherboard firmware, processor instruction sets, and the half dozen or so more things that went into a desktop.


This is why you fail. Thirty years ago I could make a wire-wrapped 68000 board that did nothing but play music. CE/CS was different back then. I'd cut pins and solder in chips to twiddle the filters on audio output. You could know the entire process from power-on to running of your computer, and it was easy to change bits even down to the hardware level, like adding a 'no place to put it unless you build it yourself' CPU/MPU/RAM upgrade and making it work. Adjust your NTSC video output? Just cut that resistor in lieu of replacing it with something really high resistance, it'll be better. Let's build our own new high-speed serial port for MIDI. How about a graphics co-processor that only does Mandelbrot calculations? Let's build three of them. Only a few of the younger generation comprehend the old ways. And the machines have changed to fewer chips; machines have turned into systems on a chip. It's a bit of a shame.


Where did one acquire your kind of knowledge outside of a university? Were there any books or USENET groups that you visited to attain it?


You would build a wire wrapped 68000 board in 1992? Isn’t that a tiny bit late to expend that much effort on a 68000?


Not at all. I was still building embedded hardware around 68k 10 years later. There are undoubtedly new products being built around 68k today.

If all you want to do is synthesize music the 68k is perfect.

If you’re taking issue with wire wrap, there just weren’t general purpose dev boards available back then. You were expected to be able to roll your own.


Wire wrap is the most reliable form of construction, used by NASA for many years for this reason - the wrapping of the wire around the square pegs creates a small cold weld at every corner.

Plus, when multilayer boards were not really a thing, wire wrap gave you all the layers you wanted, more or less.


30 years ago was DOS computers - USB certainly wasn't widespread even if it was out, and many of the video drivers at the time were “load palette here, copy memory there” type things.


As mentioned, we didn't have USB controllers until 1996. But even if you include that, which was an order of magnitude more complex than the parallel port, the USB 1.0 spec was only about 100 pages long. And yes, you could reasonably understand what was going on there.


The crazy thing to me is just how many different workflows/UI/UX you need to learn across so many platforms today. AWS, GCP, Azure - you need to learn each of them deeply in order to be "marketable", and the only way you'll learn all of them is if you happen to work at a company that happens to rely on said platform.

Then there is the low-level iLO bullshit that I've done weeks of training on for HPE, and I have been building and dealing with HPE servers since before they bought Compaq...

And don't even get me started on Sun and SGI... how much brain power was put into understanding those two extinct critters... fuck, even Cray.

There is so much knowledge that has had to evaporate in the name of progress...


Yeah, it's definitely great but also terrible that bugs can be patched so easily now.


Just so it's documented: would you please ELI5 how easy a bug is to patch today? Thanks


When DOOM was released in 1993 it spread like wildfire across bulletin boards and early FTP services. But the vast majority of players got it from a floppy copy at their local computer store - it was shareware, and so copying the floppy was fine. They even encouraged stores to charge for the shareware game, they wanted it distributed as widely as possible.

And if you paid the $40 for the full game, you got floppies mailed to you.

There was no easy way for the company to let you know there was an update available (the early versions had some well-known bugs) so the user would have to go searching for it, or hear a rumor at the store. If you called id, they'd have to mail you a disk with the updated executable on it. This was all confusing, time consuming, and was only for a game.

Things were much worse with operating systems and programs.

Now almost every piece of software is either distributed via an App Store of some sort that has built-in updates, or has a "Check for updates" button in the app itself. Post an updated build and within days a huge percentage of your users will be running the latest software.


This makes me saddest in the game market with day-one patching. I'm old enough to remember bringing a game home, plugging in the cart and playing it, but if I was to do that now with a disc or cart, there is likely a download in my future. At least some of the game publishers will pre-download so I can play when the game is made available, but I miss the days (not the bugs) of instant playing once the game was acquired.


> if I was to do that now with a disc or cart, there is likely a download in my future.

The newest Call of Duty (released this past Friday) came out on disc as well as in the various digital forms. Apparently, the disc only contained ~72MB of data for a game that easily clears 100GB when fully installed.


I miss the days when bugs were extra content. B)


For history: we're at a time where SaaS is ascendant and CI/CD tools are everywhere. This means that to patch a bug, you fix the code, commit it, and then it magically makes its way to production within like an hour, sometimes less. Customers are interacting with your product via a web browser, so they refresh the page and receive the new version of the software. Compared to the times of old, with physical media and software that needed installing, it's ridiculously easier.


Grats on this reply... even though the ELI5 context will have to assume some domain knowledge... this was a great response, thanks.


It's fascinating to think about a historian from the future coming along and reading about the day-to-day lives of 2020s developers, and them just being enamored and asking about the most mundane details of our lives that we never considered or documented at all. What colorschemes did they use in VSCode? What kinds of keyboards did they use? What's a webpack?


In 2014 it was written that Sergey Brin (Google founder) carried a gene predisposing him to a future cancer and was funding pharma around it...

So I posted this to Reddit on May 2, 2014...

===

Fri, May 2, 2014, 4:49 PM

to me

In the year 2010, scientists perfected suspended animation through the use of cryogenics for the purpose of surgery. After more than a decade of study and refinement, long term suspended animation became a reality, yet a privilege reserved for only the most wealthy and influential.

The thinking at the time was that only those who showed a global and fundamental contribution to society (while still viewed through the ridiculously tinted lenses of the global elite of the era) were worthy of entering into such state.

The process was both incredibly complex and costly. As each Transport, as they were known, required their own stand alone facility to be built around them. Significant resources were put into the development of each facility as they required complete autonomous support systems to accommodate whatever duration was selected by the Transport.

Standalone, yet fully redundant, power, security and life support systems were essential to the longevity of each facility.

Additionally, it was recognized that monetary resources would be subject to change over time, especially fiat-currency based resources. Thus there was a need to place physical holders of value that would be perceived to not deplete/dilute over time into the facilities for use by the Transport when they resuscitate.

These resources are the most sought after treasure of the new world.

After hundreds of years of human progress, civilization could no longer sustain itself in an organized self-supporting system. Through utter corruption of what some call the human soul, the world has fallen dark. There are very few outposts of safety in the current Trial of Life, as it's now known.

Many Transporters have been found, resuscitated and exploited already. There are believed to be many many more, but their locations are both secret and secure. Akin to your life relying on the discovery of an undisturbed Tomb of a Pharaoh - even though every consciousness on the planet is also seeking the same tomb.

They are the last bastion of hope for they alone have the reserves of precious materials needed to sustain life for the current generation.

Metals, technology (however outdated), medicines, seeds, weapons and minerals are all a part of each Transport 'Crop'.

One find can support a group or community for years alone based on the barter and renewable resource potentials in each Crop.

One transport, found in 2465, that of a long dead nanotech pioneer - who was purportedly responsible for much of the cybernetic medical capabilities of the 21st century, which he sought to cure his genetic predisposition for a certain disease, was so vast that the still powerful city-state in the western province of North America was able to be founded.

The resources of this individual were extraordinary, but his resuscitation, as they all are, was rather gruesome and cold.

The security systems in each Transport Facility are biometric and very complex. They can only be accessed by a living, calm and (relatively) healthy Transport.

If the system, and its control mechanism AI, detect signs of duress, stress or serious injury to the Transport - they go into fail-safe. Which is to say they self detonate. Taking with them all resources, the Transport and the Seekers as well.

There have been many instances of this, such that the art of successful Resuscitation has become an extremely profitable business.

The most active and successful Resuscitation Team (RT) have been the ironically named, Live Well Group.

The most conniving, well practiced and profitable con in the history of mankind.

LWG alone has been responsible for the resuscitation of more than 370 Transports. Their group is currently the most powerful in the world. With their own city-state, established after the Brin case mentioned, they have a cast of thousands of cons all working to ensure the Transport believes they have been Awakened to a new, advanced, safe world and that they would be allowed to take part in a significant way now that they have been Transported.

They are fooled into releasing their resources, then brutally tortured for information about any other Transports or any other knowledge they may possess, which invariably is less than nothing.

It is a hard world out there now, and the LWG's ruthless drive to locate the thousands of other Transport Facilities is the worst aspect of our modern struggle - yet ironically will serve as the basis of the ongoing endeavor of the species.

There is rumor of a vast facility of resources and Transports in an underground 'CITY' of the most elite Transports ever. A facility supposedly comprised of the 13 most powerful and rich bloodlines of people to have ever existed.

It is not known which continent this facility is on, but I believe it is in Antarctica - fully automated and with the ability to auto-resuscitate at a given time.

This is my mission, this is my life's work. To find and own this facility and crush any and all other groups that oppose me.


Today you can use the internet to patch a bug on a user's computer, and users expect this, and even allow you to patch bugs automatically.

Previously, patching bugs meant paying money for physical media and postage.


I've been trying to teach my young teenage kids about how things work, like, washing machines, cars, etc. One of the things I've learned is that it's a looooot easier to explain 20th century technology than 21st century technology.

Let me give you an example. My father was recently repairing his furnace in his camper, which is still a 20th century technology. He traced the problem to the switch that detects whether or not air is flowing, because furnaces have a safety feature such that if the air isn't flowing, it shuts the furnace off so it doesn't catch on fire. How does this switch work? Does it electronically count revolutions on a fan? Does it have two temperature sensors and then compute whether or not air is flowing by whether their delta is coming down or staying roughly the same temperature? Is it some other magical black box with integrated circuits and sensors and complexity greater than the computer I grew up with?

No. It's really simple. It's a big metal plate that sticks out into the airflow and if the air is moving, closes a switch. Have a look: https://www.walmart.com/ip/Dometic-31094-RV-Furnace-Heater-S... You can look at that thing, and as long as you have a basic understanding of electronics, and the basic understanding of physics one gets from simply living in the real world for a few years, you can see how that works.

I'm not saying this is better than what we have now. 21st century technology exists for a reason. Sometimes it is done well, sometimes it is done poorly, sometimes it is misused and abused, it's complicated. That fan switch has some fundamental issues in its design. It's nice that they are also easy to fix, since it's so simple, but I wouldn't guarantee it's the "best" solution. All I'm saying here is that this 20th century technology is easier to understand.

My car is festooned with complicated sensors and not just one black box, but a large number of black boxes with wires hooked in doing I have no idea what. For the most part, those sensors and black boxes have made cars that drive better, last longer, are net cheaper, and generally better, despite some specific complaints we may have about them, e.g., lacking physical controls. But they are certainly harder to understand than a 20th century car.

Computers are the same way. There is a profound sense in which computers today really aren't that different than a Commodore 64, they just run much faster. There are also profound senses in which that is not true; don't overinterpret that. But ultimately these things accept inputs, turn them into numbers, add and subtract them really quickly in complicated ways, then use those numbers to make pictures so we can interpret them. But I can almost explain to my teens how that worked in the 20th century down to the electronics level. My 21st century explanation involves a lot of handwaving, and I'm pretty sure I could spend literally a full work day giving a spontaneous, off-the-cuff presentation of that classic interview question "what happens when you load a page in the web browser" as it is!


> This was it, even into the 90s you could reasonable "fully understand" what the machine was doing

That was always an illusion, only possible if you made yourself blind to the hardware side of your system.

https://news.ycombinator.com/item?id=27988103

https://news.ycombinator.com/item?id=21003535


Your habit of citing yourself with the appropriate references has led me from taking your stance as an extremely literal one (“understanding all of the layers; literally”) to actually viewing your point as…very comprehensive and respectful to the history of technology while simultaneously rendering the common trope that you are addressing as just that, a trope.

Thanks, teddyh.


Fully understand and “completely able/worth my time to fix” are not identical. I can understand how an alternator works and still throw it away when it dies rather than rebuild it.


In that case, what is the value proposition of investing the time to learn how an alternator works? It surely has some value, but is it worth the time it takes to know it?

To bring it back to our topic, is it worth it to know, on an electrical level, what your motherboard and CPU is doing? It surely has some value, but is it worth the time to learn it?


You’re just saying you didn’t go to school for science or engineering. Plenty of people program and also understand the physics of a vacuum tube or capacitor. Sometimes we really had to know, when troubleshooting an issue with timing or noise in a particular signal on a PC board or cable.


It was a mix of great and awful.

I wrote tons of assembly and C, burned EPROMs, wrote documentation (nroff, natch), visited technical bookstores every week or two to see what was new (I still miss the Computer Literacy bookstore). You got printouts from a 133 column lineprinter, just like college. Some divisions had email, corporation-wide email was not yet a thing.

No source code control (the one we had at Atari was called "Mike", or you handed your floppy disk of source code to "Rob" if "Mike" was on vacation). Networking was your serial connection to the Vax down in the machine room (it had an autodial modem, usually pegged for usenet traffic and mail).

No multi-monitor systems, frankly anything bigger than 80x25 and you were dreaming. You used Emacs if you were lucky, EDT if you weren't. The I/O system on your computer was a 5Mhz or 10Mhz bus, if you were one of those fortunate enough to have a personal hard drive. People still smoked inside buildings (ugh).

It got better. AppleTalk wasn't too bad (unless you broke the ring, in which case you were buying your group lunch that day). Laserprinters became common. Source control systems started to become usable. ANSI C and CFront happened, and we had compilers with more than 30 characters of significance in identifiers.

I've built a few nostalgia machines, old PDP-11s and such, and can't spend more than an hour or so in those old environments. I can't imagine writing code under those conditions again, we have it good today.


> No source code control

30 years ago is 1992; we certainly had source control long before that!

In fact in 1992 Sun Teamware was introduced, so we even had distributed source control, more than a decade before "git invented it".

CVS is from 1986, RCS from 1982 and SCCS is from 1972. I used all four of those at various points in history.

> No multi-monitor systems, frankly anything bigger than 80x25 and you were dreaming.

In 1993 (or might've been early 1994) I had two large monitors on my SPARCstation, probably at 1280×1024.


That's like saying that "we" had computers in 1951. The future is already here – it's just not evenly distributed.

Something existing is different from something being in widespread use.

When I was a kid in the 90s, I had a computer in my room that was entirely for my personal use. There was a pretty long stretch of time where most kids I encountered didn't even have access to a shared family PC... it was much longer before they had a computer of their own.


Had a Kaypro back in ‘82 that I used to create a neat umpire for a board war game. It had a markup language and could run things that let me get on ARPANET and run Kermit. Lots of time has passed, and programs used to be way more “efficient”. And the workstations and mini-supers that followed shortly had great graphics; it just wasn’t a card so much as a system: SGIs and specialized graphics hardware such as Adsge and the stuff from E&S. Lots happened before PCs.


I'm certainly not saying that nothing happened before PCs, only that when talking about the past, one cannot say "we had X" based simply on whether X existed somewhere in the world, but one must consider also how widespread the usage of X was at the time.


There were gobs of Suns and SGIs in the 80s, just not at home. A whole lot of Unix work was done before that on PDP-11s and VAXen. Had to dial in or stay late to hack :-).


Indeed, I'm not disputing that.

However, you still need to mind the context. For instance, there existed computers in 1952. Saying that "we had computers in 1952" is technically right, but very few institutions had access to one. Most people learning to program a computer in 1952 wouldn't actually have regular access to one; they'd do it on paper, and some of their programs might actually get shipped off to be run. So it'd even be unreasonable to say "We, the people learning to program in 1952, had computers"; one might say "We, who were learning to program in 1952, had occasional opportunity to have our programs run on a computer".

Yes, there was lots of nice hardware in the 80s, and LOTS of people working professionally in the field would be using something cheaper and/or older. In the context of my original post, I took issue with op writing that "we had version control"; sure, version control existed, but it was not so widely used throughout the industry that it's reasonable to say that we had it - some lucky few did.


The topic of the thread was software developer experience as a professional career, not at home.

Sure, in the early 90s I didn't have muti-CPU multi-monitor workstations at home, that was a $20K+ setup at work.

But for work at work, that was very common.


Maybe GP was in a less developed or wealthy area than you.

Often when talking to Americans about the 90s they're surprised, partly because tech was available here later and partly because my family just didn't have enough money.


Dude is literally talking about Atari. It’s surprising they didn’t have better source control by 1992; Apple certainly had centralized source control and source databases by that point. But Atari was basically out of gas by then.


> (I still miss the Computer Literacy bookstore)

I used to drive over Highway 17 from Santa Cruz just to visit the Computer Literacy store on N. First Street, near the San Jose airport. (The one on the Apple campus in Cupertino was good, too.)

Now, all of them—CL, Stacy's Books, Digital Guru—gone. Thanks, everyone who browsed in stores, then bought on Amazon to save a few bucks.


Won’t defend Amazon generally, but you need to blame private equity and skyrocketing real estate prices for the end of brick and mortar bookstores. And, for completeness, if we’re saying goodbye to great Silicon Valley bookstores of yore: A Clean Well Lighted Place For Books in Cupertino and the BookBuyers in Mountain View were also around 30 years ago, although they were more general. And strip mall ones like Waldenbooks still existed too.


Agree with the poster. Much better IMHO and more enjoyable back then.

Because of the software distribution model back then, there was a real effort to produce a quality product. These days, not so much. Users are more like beta testers now. Apps get deployed with a keystroke. The constant UI changes for apps (Zoom comes to mind) are difficult for users to keep up with.

The complexity is way way higher today. It wasn't difficult to have a complete handle on the entire system back then.

Software developers were valued more highly. The machines lacked speed and resources - it took more skill/effort to get performance from them. Not so much of an issue today.

Still a good job, but I would likely seek something different if I was starting out today.


> Still a good job, but I would likely seek something different if I was starting out today

I'm only 6 years in, and I am starting to feel this.

I went into computer science because I knew, at some level, it was something I always wanted to do. I've always been fascinated with technology ever since I was a child -- how things work, why things work, etc.

While studying computer science at my average state school, I met a few others that were a lot like me. We'd always talk about this cool new technology, work on things together, etc. There was a real passion for the craft, in a sense. It's similar to something I felt during my time studying music with my peers.

Perhaps, in some naive way, I thought the work world would be a lot like that too. And of course, this is only my experience so far, but I have found my peers to be significantly different.

People I work with do not seem to care about technology, programming, etc. They care about dollar signs, promotions, and getting things done as quickly as possible (faster != better quality). Sure, those three things are important to varying degrees, but it's not why I chose computer science, and I struggle to connect with those people. I've basically lost my passion for programming because of it (though that is not the entire reason -- burnout and whatnot has contributed significantly).

I'm by no means a savant nor would I even consider myself that talented, but I used to have a passion for programming and that made all the "trips" and "falls" while learning worth it in the end.

I tell people I feel like I deeply studied many of the ins and outs photography only to take school pictures all day.


Don't get me wrong there are still incredible opportunities out there. IoT is starting to pick up steam. Individuals that really like knowing what the metal is doing have many green fields to settle. You can get prototype boards designed and delivered for prices that an individual can afford. That was not possible 30 years ago. If you can find areas that cross disciplines things get more interesting.

WebStuff is dead, IMHO. It is primarily advertising and eyeballs - yawn. If I see one more JS framework I'll puke. We have so many different programming languages it is difficult to get a team to agree on which one to use. :) Don't get me started on databases. I have apps that use 3 or 4 just because the engineers like learning new things. It is a mess.


Better workplaces exist! Don't settle for one that saps your will to live.


> A sense of mastery and adventure permeated everything I did.

How much of that is a function of age? It is hard to separate that from the current environment.

Personally, I don't feel as inspired by the raw elements of computing like I once did, but it is probably more about me wanting a new domain to explore than something systemic. Or at least, it is healthier to believe that.

> knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

The notion of Internet Time, where you're continuously shipping, has certainly changed how we view the development process. I'd argue it is mostly harmful, even.

> perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

I think this is the crux of it: more responsibility, more ownership, fewer software commoditization forces (frameworks), less emphasis on putting as many devs on one project as possible because all the incentives tilt toward more headcount.


Yes, indeed, it could be the Dunning-Kruger effect.


There wasn’t HN, so no distraction to digress to every now and then.

I second this - systems were small and most people could wrap their brains around them. Constant pressure existed, and there wasn’t “Google” & “SO” & other blogs to search for solutions. You had to discover things by yourself. Language and API manuals weighed quite a bit. Just moving them around the office was somewhat decent exercise.

There wasn’t as much build-vs-buy discussion. If it was simple enough you just built it. I spent my days and evenings coding and my nights partying. WFH didn’t exist, so if you were on call you were at work. When you were done you went home.

My experience from 25 years ago.


I actually used to do 'on call' by having a vt100 at the head of my bed and I would roll over every couple hours and check on things over a 9600 baud encrypted modem that cost several thousand dollars.

the only time I ever had to get up in the middle of the night and walk to the lab was the Morris worm. I remember being so grateful that someone brought me coffee at 7


I have one word for you: "Usenet".


That was there. However, where I started 25 years back, they reserved Internet access for only privileged senior and staff engineers. I was a lowly code worm: no Internet, no Usenet.


A lot of our modern software practices have introduced layers of complexity onto systems that are very simple at a fundamental level. When you peel back the buzzword technologies you will find text streams, databases, and REST at the bottom layer.

It's a self fulfilling cycle. Increased complexity reduces reliability and requires more headcount. Increasing headcount advances careers. More headcount and lower reliability justifies the investment in more layers of complicated technologies to 'solve' the 'legacy tech' problems.


> A sense of mastery and adventure permeated everything I did.

My experience too. I did embedded systems that I wrote the whole software stack for: OS, networking, device drivers, application software, etc.

> Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything.

These days programming is more trying to understand the badly-written documentation of the libraries you're using.


I'm younger than you, but one of my hobbies is messing around with old video game systems and arcade hardware.

You're absolutely right - there's something almost magical in the elegant simplicity of those old computing systems.


> A sense of mastery and adventure permeated everything I did. Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything. :-)

Are you me? ;) I feel like this all the time now. I also started in embedded dev around '86.

> Nonetheless, this dinosaur would gladly trade today's "modern" development practices for those good ol' days(tm).

I wouldn't want to give up git and various testing frameworks. Also modern IDEs like VSCode are pretty nice and I'd be hesitant to give those up (VSCode being able to ssh into a remote embedded system and edit & debug code there is really helpful, for example).


And it had its downsides too.

- Developing on DOS with non-networked machines. (OK, one job was on a PDP-11/23.)
- Subversion (IIRC) for version control via floppy - barely manageable for a two-person team.
- No Internet. Want to research something? Buy a book.
- Did we have free S/W? Not like today. Want to learn C/C++? Buy a compiler. I wanted to learn C++ and wound up buying OS/2 because it was bundled with IBM's C++ compiler. Cost a bit less than $300 at the time. The alternative was to spend over $500 for the C++ compiler that SCO sold for their UNIX variant.
- Want to buy a computer? My first was $1300. That got me a Heathkit H-8 (8080 with 64 KB RAM), an H19 (serial terminal that could do up to 19.2 Kbaud), and a floppy disk drive that could hold (IIRC) 92KB of data. It was reduced/on sale and included a Fortran compiler and macro-assembler. Woo!

The systems we produced were simpler, to be sure, but so were the tools. (Embedded systems here too.)


Yeah, I am almost identical: lots of 6805, floating-point routines and bit-banging RS232, all in much less than 2K of code memory, making functional products.
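
Bit-banging a serial port is conceptually tiny: hold the line at idle (high), drop it for one bit time as the start bit, shift the byte out LSB-first, then raise it for the stop bit. Here's a rough sketch of the transmit side in C; set_tx_pin() and bit_delay() are hypothetical stand-ins for the hardware-specific bits (on a 6805, a store to a port register and a busy-wait tuned to one bit time), and the real RS232 voltage inversion happens in the line driver.

  /* Sketch of bit-banged (software) RS232 transmit, 8N1 framing.
     set_tx_pin() and bit_delay() are placeholders for the hardware-specific
     pieces: a write to the port bit wired to TX, and a busy-wait of one bit
     period (about 104 microseconds at 9600 baud). */
  #include <stdint.h>

  static void set_tx_pin(int level) {
      /* On real hardware: write 'level' to the GPIO/port bit driving TX. */
      (void)level;
  }

  static void bit_delay(void) {
      /* On real hardware: busy-wait exactly one bit period. */
  }

  /* Send one byte: start bit, 8 data bits (LSB first), 1 stop bit. */
  static void soft_uart_send(uint8_t byte) {
      set_tx_pin(0);                 /* start bit: line low for one bit time */
      bit_delay();

      for (int i = 0; i < 8; i++) {  /* data bits, least-significant first */
          set_tx_pin((byte >> i) & 1);
          bit_delay();
      }

      set_tx_pin(1);                 /* stop bit: line back to idle (high) */
      bit_delay();
  }

  static void soft_uart_puts(const char *s) {
      while (*s)
          soft_uart_send((uint8_t)*s++);
  }

Receive is the same idea in reverse: wait for the falling edge of the start bit, delay half a bit time, then sample the line at one-bit intervals.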

Things like basketball scoreboards, or tractor spray controllers to make the application of herbicide uniform regardless of speed. Made in a small suburban factory in batches of a hundred or so, by half a dozen to a dozen "unskilled" young ladies, who were actually quite skilled.

No internet, the odd book and magazines, rest of it, work it out yourself.

In those days it was still acceptable, if not mandatory, to use whatever trick you could come up with to save some memory.

Direct readability didn't matter so much, though we always took great pains in the comments for the non-obvious, including non-specified addressing modes and the like.

This was around the time the very first blue LEDS came out.

When the web came along, and all the frameworks etc, it just never felt right to be relying on arbitrary code someone else wrote and you did not know the pedigree of.

Or had at least paid for so that you had someone to hassle if it was not doing what you expected and had some sort of warranty.

But there was also a lot of closed source, and libraries you paid for if you wanted to rely on someone else's code and needed to save time or do something special - an awful lot compared to today.

Microsoft C was something like $3000 (maybe $5K, can't remember exactly) from memory, at a time when that would buy a decent second-hand car and a young engineer might be getting 20-25K a year tops (AUD).

Turbo C was a total breakthrough, and the 286 with a 20MB hard drive was the PC of choice, with the Compaq 386-20 just around the corner.

Still, I wouldn't go back when I look at my current 11th Gen Intel CPU with 32GB of RAM, 2 x 1TB SSDs and a 1080 Ti graphics card driving multiple 55-inch 4K monitors; not even dreamable at the time.


don't forget the community. it was very much the case that you could look at an IETF draft or random academic paper and mail the authors and they would almost certainly be tickled that someone cared, consider your input, and write you back.

just imagine an internet pre-immigration-lawyer where the only mail you ever got was from authentic individuals, and there were no advertisements anywhere.

the only thing that was strictly worse was that machines were really expensive. it wasn't at all common to be self-funded.


> knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

> Schedules were no less stringent than today;

So … how did that work, then? I know things aren't done, and almost certainly have bugs, but it's that stringent schedule and the ever-present PM attitude of "is it hobbling along? Good enough, push it, next task", never connecting the dots to "why is prod always on fire?", that causes the never-ending stream of bugs.


With no PMs you dealt directly with the boss and managed your own tasks, so you had a hard deadline, showed demos, and once it was done you handled support/training. It was waterfall, so not finishing on time meant removing features, and finishing early meant adding features if you had time. Everything was prod. You needed to fix showstopper bugs/crashes, but other bugs could be harmless (spelling, for example) or situational and complex. You lived with them, because bugs were part of the OS, programming language, or memory driver experience at the time.


As my old boss once said (about 30 years ago actually!) when complaining about some product or the other "this happens because somewhere, an engineer said, 'fuck it, it's good enough to ship'."


I wonder how much of this is due to getting old vs actual complexity.

When I started I was literally memorising the language of the day and I definitely mastered it. Code was flowing on the screen without interruption.

Nowadays I just get stuff done; I know the concepts are similar, I just need to find the specifics and I'm off to implement. It's more akin to a broken faucet and it definitely affects my perception of modern development.


Thanks. I'd forgotten how much the 68705 twisted my mind.

And how much I love the 68HC11 - especially the 68HC811E2FN, gotta get those extra pins and storage! I never have seen the G or K (?) variant IRL (16K/24K EPROM respectively and 1MB address space on the latter). Between the 68HC11 and the 65C816, gads I love all the addressing modes.

Being able to bum the code using zero-page or indirectly indexed or indexed indirectly... Slightly more fun than nethack.


https://en.wikipedia.org/wiki/Rosy_retrospection

I am sure everything was great back then, but I've been coding for 20 years, and a lot of problems of different types (including recurring bugs) have been solved with better tooling, frameworks and tech overall. I don't miss it too much.


Exactly my experience coming out of school in 1986. Only for me it was microcontrollers (Intel 8096 family).

Thanks for bringing back some great memories!


I miss everything being a 'new challenge'... Outside of accounting systems - pretty much everything was new ground, greenfield, and usually - fairly interesting :-)


I started my first dev job in early 1997 which is more like 25 than 30 years ago but I think the milieu was similar.

The internet was mostly irrelevant to the line of work I was involved in although it was starting to have impact. We had one ISDN 2x line for the entire office. It was set up to open on demand and time out a few minutes later as it was billed by the minute.

I worked on an OpenGL desktop application for geoscience data visualization running on Irix and Solaris workstations.

The work life balance was great as the hardware limitations prevented any work from home. Once out of the office I was able to go back to my family and my hobbies.

Processes were much lighter with far less security paranoia as cyber attacks weren't a thing. Biggest IT risk was someone installing a virus on a computer from the disk they brought to install Doom shareware.

The small company I worked for did not have the army of product managers, project managers or any similar buffoonery. The geologists told us developers what they needed, we built it and asked if they liked the UI. If they didn't we'd tweak it and run it by them again until they liked it.

In terms of software design, OO and Gang of Four Patterns ruled the day. Everyone had that book on their desks to accompany their copies of Effective C++ and More Effective C++. We took the GoF a little too seriously.

Compensation was worse for me though some of that is a function of my being much more advanced in my career. These days I make about 10x what I made then (not adjusted for inflation). That said, I led a happier life then. Not without anxiety to which I'm very prone but happier.


Effective C++ was an amazing book. I bought copies for the entire team out of my own pocket. The Gang of Four on the other hand was an unfortunate turn for the industry. As you say we took it too seriously. In practice very few projects can benefit from the "Factory pattern", but I've seen it used in way too many projects to the detriment of readability. I worked in one source code base where you had to invoke 4 different factories spread across many different source files just to allocate one object.
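
To make that concrete, the kind of layering I mean looked roughly like this (a hypothetical C# sketch, names invented; not the actual codebase):

    // Several factories chained together just to hand back one object.
    class Widget { }

    class WidgetFactory
    {
        public Widget CreateWidget() => new Widget();
    }

    class ServiceFactory
    {
        public WidgetFactory CreateWidgetFactory() => new WidgetFactory();
    }

    class ServiceFactoryFactory
    {
        public ServiceFactory CreateServiceFactory() => new ServiceFactory();
    }

    class Program
    {
        static void Main()
        {
            // Three hops of indirection...
            var widget = new ServiceFactoryFactory()
                .CreateServiceFactory()
                .CreateWidgetFactory()
                .CreateWidget();

            // ...versus what the call site actually needed:
            var widget2 = new Widget();
        }
    }

None of that indirection buys anything unless some caller genuinely needs to swap the implementation, which is exactly the "context" the book kept stressing.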


> As you say we took it too seriously.

The real problem is that many people didn't actually read the book or, if they did, they only took part of it seriously.

Each pattern chapter has a pretty long section that details when you should and should not use the pattern. The authors are very clear about understanding the context and not mis-applying patterns.

But once it became popular (which happened because these patterns are quite useful), it got cargo culted and people started over-applying them because it sent a social signal that, "Hey, I must be a good developer because I know all these patterns."

The software engineering world is a much better one today because of that book now that the pendulum has swung back some from the overshoot.


It's amazing how many times I saw the Singleton pattern between 2000 - 2012 or so, and in almost every case, it degenerated into a global variable that was used by everything in a component or system.

It would have been more apt to name it the Simpleton pattern, after most of its practitioners.

This stuff started to go away w/ modern DI frameworks. In fact, I don't really see much of the GoF patterns anymore, particularly ones for managing the order of instantiation of objects. Everything in the C# world has been abstracted/libraried/APIed away. But I wouldn't be surprised if GoF patterns are still prevalent in C/C++/SmallTalk code.
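
For anyone who hasn't seen the degeneration first-hand, it tends to look something like this (a hedged C# sketch; the DI part assumes a container along the lines of Microsoft.Extensions.DependencyInjection):

    // A hand-rolled Singleton: in practice a global variable with extra steps.
    public sealed class AppConfig
    {
        public static AppConfig Instance { get; } = new AppConfig();
        private AppConfig() { }

        public string ConnectionString { get; set; } = "";
    }

    // Every caller reaches straight for the global, so hidden coupling and
    // test pain follow:
    //     var cs = AppConfig.Instance.ConnectionString;

    // With a DI container, the "exactly one instance" decision lives in one
    // registration and callers just declare a constructor dependency:
    //     services.AddSingleton<AppConfig>();
    //     public Importer(AppConfig config) { ... }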


I am a newly employed engineer and I am assigned to learn Design Patterns (the book and all) right as of today. Needless to say, I am very intrigued. Could you expand on what you mean by too far, beyond over-applying the patterns?


Don't listen too closely to the issues people have with GoF and Design Patterns. Yes, they have their issues, and over-reliance is a real problem. However, as a junior engineer you should learn these things and come to this realization (or not) yourself! Also, if your company does use these things, code standardization across the org is more important than the downsides of overly complex patterns.

I read these arguments instead of focusing on learning patterns and it just caused grief until I decided to learn the patterns.


> However as a junior engineer you should learn these things and come to this realization (or not) yourself!

This is such great advice. I wonder if there's even something in there about how learning through experience why something doesn't work is as useful as learning it in the first place.

I do think we throw the baby out with the bathwater with many ideas, despite learning useful skills from them subconsciously.


Thank you for the response.

Yes, I intend to learn and integrate design patterns and OO programming in general so that I can gain some confidence, and maybe later I can finally understand why my software development professors hated this so much and taught us Haskell and Clojure instead :-)


If you're anything like me, given enough time you wonder if all the 'fluff' of OO is necessary as you seem to be writing code to satisfy the programming style rather than the domain problem. You'll then try FP and find it has its own pitfalls - especially around how complex the code can get if you have a load of smart Haskell engineers - and suddenly they have the same problems ('fluff'). Apparently at some point I'll have a similar move to Lisp and discover how complex the code base can be with multiple engineers (I'm 8 or 9 years into this engineering journey myself!).

My current goal with software is to write it as simply as possible, so that a junior developer with 6 months' experience could read my code and know how to modify it.


Agree with the GP. In a sense even if design patterns are not such a great idea, there's so much code written with those patterns in mind (and classes/variables named accordingly) that it's beneficial to understand at least briefly what the names mean.

(That said, quoting Wikipedia, which I agree with also: "A primary criticism of Design Patterns is that its patterns are simply workarounds for missing features in C++". In particular, these days with more modern languages [and also the modernization of C++] some of the workarounds aren't that important any more)

As for why your professors prefer Haskell and Clojure... for some reason functional programming aligns with the way the stereotypical academia type person thinks. In practice, you should be using the best tool for the task, and learning various aspects of software engineering (as opposed to taking a side) should help you in the long run.


What often happens is you never get to "code standardization across the org" because technology changes too fast. But you still have to deal with overly complex patterns, varying and mis-applied patterns, etc.


Oh I 100% agree but as a junior engineer you're not going to be able to change that, if you can change it you probably won't change it for the better, and using HN comments to fuel debates over long-standing patterns will just cause resentment. These are totally valid opinions to have once you find them out for yourself, IMO.


Design patterns do a good job of capturing some design decisions you'll need to make in your career. They represent a level of architectural knowledge that is often poorly captured and communicated in our industry (slightly above language features, and below whole frameworks). Many people treated the book (either in good faith over-enthusiasm, or as a bad faith strawman) as a repository of 'good' code to cut and paste. Some of those people would end up working in a functional language and claiming they don't need design patterns because they're the same as having first class functions. This is just mistaking the implementation for the design motivation. And even then, tough luck, however clever your language. Monads? A design pattern.

So, I will stress again: design patterns represent decisions you can make about your code, in a particular context. If you want to be able to supply or accept a different algorithm to make a decision, that's the strategy pattern. Maybe it's an object with an interface, maybe it's a function callback. The important bit is you decided to let users of your API supply their own policy to make a decision (instead of you just asking for a massive dictionary called 'options' or whatever). If you want to ensure that all calls to one subsystem happen to a single instance, that's the singleton pattern. Whether you enforce that with static calls or in your inversion of control container, you're still making the decision to instantiate it once and not have every callee set up its own version taking up its own resources (or maybe you are, it's a decision after all).
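
As a rough illustration of that strategy-pattern point, here is the same decision expressed both ways (a minimal C# sketch; the names are invented):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // 1) The strategy as an interface the caller implements:
    public interface IPricingStrategy
    {
        decimal Price(decimal basePrice);
    }

    public sealed class HolidayDiscount : IPricingStrategy
    {
        public decimal Price(decimal basePrice) => basePrice * 0.9m;
    }

    // 2) The strategy as a plain function callback:
    public static class Checkout
    {
        public static decimal Total(IEnumerable<decimal> items,
                                    Func<decimal, decimal> pricing)
            => items.Select(pricing).Sum();
    }

    // Either way the design decision is identical: the caller supplies the
    // algorithm instead of the library hard-coding it, e.g.
    //     var total = Checkout.Total(cart, p => new HolidayDiscount().Price(p));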

I get somewhat agitated about this stuff, because people are raised to be very skeptical of design patterns. This means they go through their career building up all sorts of design knowledge, but rarely naming and sharing it at a useful level of granularity. That's a huge waste! But the saddest thing about the whole conversation was that the decision _not_ to use a particular design pattern is just as valid as the one to use it, and _even then_ it's still a superior approach because you can be explicit about what you're not doing in your code and why.

Anyway, good luck in your career!


> working in a functional language and claiming they don't need design patterns because they're the same as having first class functions

The point really was that you need different design patterns in a functional language, and most of the GoF design patterns are useless in a functional language, as they either deal with state, or they deal with something that had some better solution in a functional language (e.g. through algebraic datatypes, which were built-in).

So if you amend "we don't need design patterns" to "we don't need most of the GoF design patterns", it's actually a true statement.

> Monads? A design pattern.

Exactly.

And now the pendulum has swung back, and instead of providing primitive language features that would make using the Monad design patterns easy, we have half-assed async/await implementations in lots of imperative languages, just because people didn't realize async/await is just a particular use of the Monad design pattern.
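
To make that last point concrete, here is roughly what "monad as design pattern" looks like for Task in C# (a sketch, not any particular library's API):

    using System;
    using System.Threading.Tasks;

    public static class TaskMonadSketch
    {
        // The monadic "bind": take an asynchronous value and a function that
        // produces the next asynchronous step, and glue them together.
        public static async Task<TOut> Bind<TIn, TOut>(
            this Task<TIn> source, Func<TIn, Task<TOut>> next)
            => await next(await source);
    }

    // async/await is essentially compiler sugar for this chaining:
    //     var user   = await LoadUser(id);
    //     var orders = await LoadOrders(user);
    // has the same shape as LoadUser(id).Bind(LoadOrders).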

> This means they go through their career building up all sorts of design knowledge, but rarely naming and sharing it at a useful level of granularity.

Which is really sad, because the GoF book really emphasized this point.

But for some reason programmers seem to have a desire to turn everything into some kind of cult...


The patterns movement, for more context, arose out of the work of the architect Christopher Alexander, who explored the concept of patterns in buildings like “courtyards” or “bay windows”. The problem with the GoF ones, as they’ve been applied to software, is an overemphasis on applying them for their own sake rather than fitting the problem at hand - imagine if every window in a house was a bay window. There are a lot of software projects that end up like that and turn people off OOP in general.


Absolutely learn the patterns. You will encounter places to use them. The book isn't an instruction manual on how to write software. It is a "here are some useful patterns that occur occasionally during development". Having a common language to talk about such things is useful.

It is very easy to over-apply patterns at the cost of readability and maintainability. Realize that your code is more likely to be rewritten before most of the flexibility provided by your application of patterns is ever used.


Also good to know these patterns if you're dealing with a lot of legacy code, particularly 199x - 201x code that implemented a lot of these patterns.

Some of them are straightforward (factory, adapter). Some of them are almost never used (flyweight). Some are more particular to a programming language; you might see a lot of visitor patterns in C++ code, for instance, but IIRC, that wouldn't come up in Smalltalk because it supported paradigms like double dispatch out of the box.


I will comment: when they came out the idea of patterns captivated a lot of us and it became an article of faith that all code would be designed around a particular pattern.

It's still a great idea to use patterns... but I think people have come to realise that sometimes they over complicate things and maybe they don't always fit the task at hand. If that's what you are finding then maybe don't use a pattern and just code something up.

They are a useful and powerful tool, but not a panacea.


It's that. Over applying patterns in the areas of the code that are unlikely to need the extensibility afforded by employing these patterns. The cost of using the patterns is that they add a level of indirection which later costs you and others some extra cognitive load. By and large though the GoF patterns are relevant today and when applied judiciously they do help to organize your code.


Have these OO patterns become less relevant, or is it just that they were absolutely standard 20-30 years ago, so that they seem old and less relevant only relative to their previous dominance?


A lot of it is that modern frameworks include a lot of the behaviors that required you to manually code the patterns back then. e.g., I can create an ObservableCollection in C# with a single line of code, but in 1996 C++ I'd have to go to the trouble of building out an Observer pattern, and it still wouldn't have all the features that IObservable does.
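
For reference, the one-liner in question looks something like this (a small sketch; ObservableCollection lives in System.Collections.ObjectModel):

    using System;
    using System.Collections.ObjectModel;

    var items = new ObservableCollection<string>();

    // The observer plumbing comes for free via the CollectionChanged event.
    items.CollectionChanged += (sender, e) => Console.WriteLine($"Change: {e.Action}");

    items.Add("hello");   // fires CollectionChanged with Action = Add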


HeyLaughingBoy is right about patterns being built into frameworks we use today (can’t reply to that comment because it’s nested too deeply).

Rails is an example. I’ve seen a number of talks and articles by DHH that emphasize patterns and talking with people who wrote patterns. Rails built those in (like “model view controller”).

Libraries and frameworks weren’t publicly available 30 years ago. Certainly not for free. The patterns are still useful, it’s just that a library or framework is often more efficient than reimplementing a pattern from scratch.


> In practice very few projects can benefit from the "Factory pattern"

The factory pattern in C#:

    public IMeasuringDevice CreateMeasuringDevice(Func<IUnitConverter> unitConverterFactory)
In TypeScript:

    function createMeasuringDevice(unitConverterFactory: () => UnitConverter): MeasuringDevice
Very few projects can benefit from this!?


Would you say the GoF are more descriptive than prescriptive?

That is, not “do these to write good code” but “well written code looks like this”


It is meta-prescriptive.

It doesn't say, "Do this to make your code better." It says, "If you have this specific problem under these constraints, then this can help."


Definitely yes. I'd even go as far as to say they're just exemplary, meaning "well-written code can look (for example) like this". But since they became "industry standards", they help your code be understood (either by other people, or by you when you eventually forget how/why you wrote it that way), which helps speed up code review and makes it easier to maintain / refactor old code...


> The internet was mostly irrelevant to the line of work I was involved in although it was starting to have impact. We had one ISDN 2x line for the entire office. It was set up to open on demand and time out a few minutes later as it was billed by the minute.

Early gig I had in 97 was working on building an internal corp intranet for a prototyping shop. There were around 50-60 folks there - probably 20 "upstairs" - doing the office/business work. I was upstairs. I was instructed to build this in Front Page. Didn't want to (was already doing some decent PHP on the side) but... hey... the IT guy knew best.

Asked for some books on FP. Nope - denied. So I spent time surfing through a lot of MS docs (they had a moderate amount of online docs for FP, seemingly) and a lot of newsgroups. I was pulled aside after a while and told I was using too much bandwidth. The entire building had - as you had - a double ISDN line, a whopping 128k shared between 20+ people. I was using 'too much' and this was deemed 'wrong'. I pointed out that they decided on the tool, which wasn't a great fit for the task, and then refused to provide any support (books/etc). I left soon after. They were looking for a way to get me out - I think they realized an intranet wasn't really something they could pull off (certainly not in FP) but didn't want to 'fire' me specifically, as that wasn't a good look. Was there all of... 3 months IIRC. Felt like an eternity.

Working in software in the 90s - a bookstore with good tech books became invaluable, as well as newsgroups. No google, no stackoverflow, often very slow internet, or... none sometimes.


For some things, it was possible to substitute grit, experimentation, practice and ultimately mastery.

But for others, especially being closer to hardware, a good book was necessary. These days, it still might be.


> substitute grit, experimentation

And.. there were far fewer distractions. Waiting for compiling? Maybe you could play solitaire, but there were fewer 'rabbit holes' to go down because... you likely weren't on a network. Even by mid 90s, you weren't easily 2 seconds away from any distraction you wanted, even if you were 'online'.


Similarly I started in 2000, just as internet applications were starting to become a thing. I could list a whole load of defunct tech stacks: WebSphere, iPlanet, NSAPI, Zeus Web Server (where I worked for about a year), Apache mod_perl, Delphi etc. And the undead tech stack: MFC.

Compensation: well, this is the UK so it's never been anywhere near US levels, but it was certainly competitive with other white-collar jobs, and there was huge spike briefly around 2001 until the "dotcom boom" burst and a whole load of us were laid off.

Tooling: well, in the late 90s I got a copy of Visual Studio. I still have Visual Studio open today. It's still a slow but effective monolith.

The big difference is version control: not only no git, but no svn. I did my undergraduate work in CVS, and was briefly exposed to SourceSafe (in the way that one is exposed to a toxin).

Most of the computers we used back in 2000 were less powerful than an RPi4. All available computers 30 years ago would be outclassed by a Pi, and the "supercomputers" of that day would be outclassed by a single modern GPU. This makes less difference than you'd expect to application interactive performance, unless you're rendering 3D worlds.

We ran a university-wide proto-social-network (vaguely similar to today's "cohost") off a Pentium with a 100MB hard disk that would be outclassed by a low-end Android phone.

Another non-obvious difference: LCD monitors weren't really a thing until about 2000 - I was the first person I knew to get one, and it made a difference to reducing the eyestrain. Even if at 800x600 14" it was a slight downgrade from the CRT I had on my desk.


I kept buying used higher-end CRTs for almost a decade because their refresh rate and resolution so greatly outstripped anything LCD that was available for sale.


My parents got an early 2002 LCD display... I never knew what I lost by not gaming on a CRT. Low res too... sad. All for what, space and "environment"?

Like, look at this shit: https://imgur.com/a/FiOf7Vw

https://youtu.be/3PdMtwQQUmo

https://www.youtube.com/watch?v=Ya3c1Ni4B_U


I went to a PC expo of some sort in NYC in 1999 because I was in town for an interview. LCDs had just come out, but every exhibit in the hall had them because they were new but also because you could ship a whole bunch of flat screens in the same weight as a decent CRT.


I was working at an internet startup in 1996. We basically built custom sites for companies.

It’s hard now to appreciate how “out there” the internet was at the time. One of the founders with a sales background would meet with CEOs to convince them they needed a website. Most of those meetings ended with a, “We think this internet web thing is a fad, but thanks for your time”.


It's interesting to consider this viewpoint 30 years later and wonder what will bring about the next age. Is it something in its infancy being dismissed as a fad? Have we even thought of it yet?


Well, the answer has to be the "metaverse". The jury is of course out on whether it will take root the way the internet did.


I disagree.

"The Metaverse" is still fundamentally the same as other content/experience the same way 3D movies are fundamentally the same as 2D movies.

Being able to see a projection of someone in chair next to you does not really deepen or hasten the sharing of ideas in any drastic way compares to pre-internet vs post-internet communication.

If I had to guess, my suspicion is that direct brain-to-brain communication is the next epoch-definiting development.


Instant 3d printing. It is too costly and slow now but things will change.


> I started my first dev job in early 1997 which is more like 25 than 30 years ago but I think the milieu was similar.

My first internship was in 2000, and I feel like, overall, not a lot has changed except the deck chairs. Things still change just as fast as back then.


Taking inflation, and especially rocketing housing costs, plus work-life balance into account, maybe you actually earned more in 1997 than today for the same kind of job?


The thing I remember most about those days was how often I went to Barnes & Nobles. You simply couldn't find the information online at that point. I'd go and buy a coffee and sit with a stack of books on a given topic I needed to research; then after ~45 minutes I'd decide and buy one or two books, and head back to the office to get to work.


Or to the Computer Literacy bookstore. Each time I attended a conference in the Bay Area I made sure to drop into their store in San Jose to spend hours poring over the multitude of recent books on all the new stuff happening in computing. I then had to lug 3-5 heavy books back on the plane with me. Then CL opened a store near me (Tyson's Corner in northern Virginia) which I visited at least weekly. I musta spent thousands on books back then, especially from O'Reilly. The world of computing was exploding, and just keeping up with it was a challenge but also a major blast.

No source on the changes afoot then in computing was more compelling than WiReD Magazine. Its first 3-5 years were simply riveting: great insightful imaginative stories and fascinating interviews with folks whose font of creative ideas seemed unstoppable and sure to change the world. Each month's issue sucked all my time until it was read cover to cover and then discussed with others ASAP. That was a great time to be young and alive.

But Wired wasn't alone. Before them, Creative Computing and Byte were also must reads. Between 1975 and maybe 1990, the computing hobbyist community was red hot with hacks of all kinds, hard and soft. No way I was going to take a job that was NOT in computing. So I did. Been there ever since.


Awesome to see CL listed here ... worked at computerliteracy.com which eventually became fatbrain.com. Good times!


Or the library! The GIF spec was published in a magazine IIRC. I wrote a GIF viewer that supported CGA, EGA, VGA, ... displays.


The library had some things, but man things were moving so fast in the late 80s early 90s that you often had to buy the books you needed directly; because by the time they appeared in the library you'd be on to something else.

The right magazines were worth their weight in gold back then, for sure.


The MSDN library CDs were indispensable for Windows developers in the 90s. Amazing resource all at your fingertips! What a time we were living in!


A pirated copy of Visual Studio 2005 started my career.

We didn't have internet at home, and I was still in school, so the Knowledge Base articles on the MSDN CDs pretty much taught me.


Oh, you just made me completely melancholic with that atmospheric description! Makes me miss these times a lot. The abundance of information is truly a blessing, but also a curse.


Yeah it was huge for me when books started to come with CD-ROM copies (c. 1997?) and I could fit more than one "book" in my laptop bag.


The O'Reilly Cookbooks were always the best.

I still have most of my dev books. I figure if I ever get a huge bookshelf they'll help fill it out, and give the kids something to talk about.


Or the “Computer Literacy” bookstore in the Silly Valley.


I miss those days. The books weren't perfect but I feel like enough quality was put into a lot of them because it was hard to errata a print run. Of course there is a lot more information out there for free nowadays but it's harder to sift through. I think the nicer thing is that eventually you'll find content that speaks to you and the way you learn.


God damn, I miss those days!


My go-to was Softpro Books in Denver. I would scan the shelves at B&N and Borders too, just in case, but Softpro had a much better selection.


I was a programmer back in the MS-DOS and early Windows days. My language of choice was Turbo Pascal. Source control consisted of daily backups to ZIP files on floppy disks. The program I wrote talked to hand-held computers running a proprietary OS that I programmed in PL/N, their in-house variant. The communications ran through a weird custom card that talked SDLC (I think).

I was the whole tech staff; work-life balance was reasonable, as everything was done during normal day-shift hours. There was quite a bit of driving, as ComEd's power plants are scattered across the Northern half of Illinois. I averaged 35,000 miles/year. It was one of the most rewarding times of my life, work-wise.

The program was essentially a set of CRUD applications, and I wrote a set of libraries that made it easy to build editors, much in the manner of the then-popular dBASE II PC database. Just call with X, Y, Data, and you had a field editor. I did various reports, and for the most part it was pretty easy.

The only odd bit was that I needed to do multi-tasking and some text pipelining, so I wrote a cooperative multi-tasker for Turbo Pascal to enable that.
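
For younger readers: a cooperative multitasker is less exotic than it sounds. The sketch below shows the general shape in C# (purely illustrative; the original was Turbo Pascal, and the details were different):

    using System;
    using System.Collections.Generic;

    // Each "task" is a routine that does a bit of work and then voluntarily
    // yields control back to the scheduler.
    class CoopScheduler
    {
        private readonly Queue<IEnumerator<object>> tasks = new Queue<IEnumerator<object>>();

        public void Spawn(IEnumerator<object> task) => tasks.Enqueue(task);

        public void Run()
        {
            while (tasks.Count > 0)
            {
                var task = tasks.Dequeue();
                if (task.MoveNext())       // run until the task yields
                    tasks.Enqueue(task);   // not finished: back of the queue
            }
        }
    }

    // Usage (hypothetical): an iterator method that yields between steps.
    //     static IEnumerator<object> Blink()
    //     {
    //         for (int i = 0; i < 3; i++) { Console.WriteLine("blink"); yield return null; }
    //     }
    //     var sched = new CoopScheduler();
    //     sched.Spawn(Blink());
    //     sched.Run();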

There weren't any grand design principles. I was taught a ton about User Friendliness by Russ Reynolds, the Operations Manager of Will County Generating Station. He'd bring in a person off the floor, explain that he understood this wasn't their job, and that any problems they had were my fault, and give them a set of things to do with the computer.

I quickly learned that you should always have ** PRESS F1 FOR HELP ** on the screen, for example. Russ taught me a ton about having empathy for the users that I carried throughout my career.


> It was one of the most rewarding times of my life, work wise.

Did you feel this way in the moment, or did you realize it when looking back?


I was standing outside the gates of Crawford Generating Station, when I realized that no matter what was wrong, when I was done with my visit, they were going to be happy. It was that moment of self actualization that doesn't often come around.

Looking back in retrospect I see how dead-nuts simple everything was back then, and how much more productive a programmer could be, even with the slow-as-snot hardware, and without Git. Programming has gone far downhill since then, as we try to push everything through the internet to an interface we don't control. Back then, you knew your display routines would work, and exactly how things would be seen.


Not the person you replied to, but I definitely felt that way in the moment. My success and enjoyment writing Clipper (a dBase III compiler) and Turbo Pascal applications for local businesses while I was in high school is the reason I went on to get a computer science degree at university.


Also not the person you replied to, but yes, in the moment. The feeling was that I couldn't believe someone would pay me for that.


It was awful. And it was great.

The awful part was C++. There were only two popular programming languages: C++ and Visual Basic. Debugging memory leaks, and memory corruption due to stray pointers and so on in C++ was a nightmare. Then Java came out and everything became easy.

The great part was everyone had offices or at least cubicles. No "open floor plan" BS. There was no scrum or daily standup. Weekly status report was all that was needed. There was no way to work when you're not at work (no cell phone, no internet), so there was better work-life balance. Things are definitely much worse now in these regards.

All testing was done by QA engineers, so all developers had to do was write code. Code bases were smaller, and it was easier to learn all there is to learn because there was less to learn back then. You released product every 2.5 years, not twice a week as it is now.


> There were only two popular programming languages: C++ and Visual Basic.

And COBOL. Vast, vast plurality of the business economy ran on COBOL. We also had mainframe assembler for when speed was required, but COBOL had the advantage of portability to both mainframe and minicomputer. Anything fast on the mini was written in C.

When I started we had a PC to use for general office tasks ( documents, e-mails and such ) and a 3270 or 5250 green-screen terminal for actual work. The desks groaned under the weight and the heat was ferocious. Overhead lockers were jam-packed with code printouts on greenbar and hundreds of useful documents. "Yeah I have that in here somewhere" and Bob would start to burrow into stacks of pages on his desk.

Cubicle walls were covered with faded photocopies of precious application flowcharts and data file definitions.

Updates to insurance regulations would arrive in the post and we were expected to take ownership and get them implemented prior to compliance dates. There was no agile, no user stories, no QA teams, no 360 reviews. Just code, test, release.

You knew who the gurus were because they kept a spare chair in their cubicles for the comfort of visitors.

Good times.


And don't forget Perl. :)


Pretty sure Pascal/Delphi was also popular until the early 2000s...


I remember Turbo Pascal 3.0, the one that generated COM files for MS-DOS (like EXE files, but about 2KB smaller).

I loved that Turbo.com and COM files 30 years ago!

Later I started to use Turbo Pascal 5.5, with OO support and a good IDE.


Not only that, but Turbo Pascal was very efficient as a linker too, linking only library code that was actually used in the program, as opposed to Turbo C/C++ that would link the entire library. As a result, "Hello, World" was ~2KB for TP vs. ~15KB for TC. I may not remember the sizes correctly, but the difference was dramatic. Of course, for bigger programs the difference was a bit smaller. And it was fast!


Jeff Duntemann's book on Turbo Pascal is still one of my favorite texts of all time. He combined his enthusiasm for the subject with a deft hand at explaining concepts intuitively.

And of course, there was Peter Norton's Guide to the IBM PC. The bible, back then.


Still popular! Where's all my FreePascal nerds?


> The great part was everyone had offices or at least cubicles. No "open floor plan" BS. There was no scrum or daily standup. Weekly status report was all that was needed. There was no way to work when you're not at work (no cell phone, no internet), so there was better work-life balance. Things are definitely much worse now in these regards.

FWIW I have had all of these at every place I've worked, including my current job. Places like that are out there. If you're unhappy with your current job, there's never been a better time to move.


C++ was different on different operating systems (every compiler rolled its own template instantiation model). Portability was hard work.


And you downloaded the SGI STL.


Moving from C++ to Java in 1998 instantly made me twice as productive, as I was no longer spending half my time managing memory.

Together with starting pair programming in 2004, that is the biggest improvement in my work life.


> There were only two popular programming languages: C++ and Visual Basic.

Not really. Back in 1992 I was doing mostly C, with Perl second and shell scripting thrown in around the edges.


I didn't start that long ago, but at my first full-time job I also had my own office. An unthinkable luxury compared to now. Also, figuring out requirements on my own was nice. On the other hand, I think work was much more isolated; the office was in the middle of nowhere. Also, during that time it was still normal that every second project failed or became some sort of internal vaporware. Functioning management seemed almost non-existent.


PL/1 on Stratus and RPG on sys/[36|38] and AS/400 checking in! :-D


My first job was in 2010, not that long ago but still long enough to experience offices and no standups... definitely good times


Was Valgrind already a thing?


Valgrind was huge when it became available early in the 21st century, for finding leaks but also because it gave us engineers ammunition to use against management to keep our development systems running on Linux.

There were other profiling tools before then, but they were extremely pricey.


Bit surprised - just found out that Valgrind was first released in Feb 2002. It turns out I've been using it almost since its first days, 2002 for sure. Had no idea.


Good times. Most of us hadn't gotten "high on our own supply" yet, and the field was wide open.

You got the feeling of a thousand developers all running off in different directions, exploring the human condition and all of the massively cool things this new hammer called "programming" can do.

Compare that to today. Anywhere you go in the industry, it seems like there's already a conference, a video series, consultants, a community, and so on. Many times there are multiple competing groups.

Intellectually, it's much like the difference folks experienced going cross-country by automobile in, say, 1935 versus 2022. Back then there was a lot of variation and culture. There were also crappy roads and places you couldn't find help. Now it's all strip malls and box stores, with cell service everywhere. It's its own business world, much more than a brave new frontier. Paraphrasing Ralphie in "A Christmas Story", it's all just crummy marketing.

(Of course, the interesting items are those that don't map to my rough analogy. Things like AI, AR/VR, Big Data, and so on. These are usually extremely narrow and at the end of the day, just bit and pieces from the other areas stuck together)

I remember customers asking me if I could do X, figuring out that I could, and looking around and not finding it done anywhere else. I'm sure hundreds, maybe thousands of other devs had similar experiences.

Not so much now.


Lots of books! We had the internet but it wasn't very useful for looking up information about programming. We had usenet but it would take a while to get an answer, and often the answer was RTFM.

But what we did have were O'Reilly books! You could tell how senior an engineer was by how many O'Reilly books were on their shelf (and every cubicle had a built in bookshelf to keep said books).

I remember once when our company fired one of the senior engineers. The books were the property of the company, so they were left behind. Us junior engineers descended on his cubicle like vultures, divvying up and trading the books to move to our own shelves.

I still have those books somewhere -- when I got laid off they let me keep them as severance!


In addition to the ubiquitous ORA books (really, did anyone ever understand Sendmail config files before the bat book?) there were also a lot of print-outs. Huge swaths of code printed on 132-col fanfold paper. You might have a backup copy of the source code on tape somewhere, but nothing made you feel secure like having a copy of the previous working version printed out and stashed somewhere on your desk or in a drawer.


Lots of coding was done on those printouts also - you'd print out your function or your program and sit back and mark it up - especially if you were discussing or working with someone. Screens were small back then!


Oh, these books.

They are still pretty much all around 'old' IT companies, displayed in shelves and bookcases, as artifacts that explain what the old languages and systems were.

I love the retro-futuristic vibe of the covers of some of these. And of their content. They invite the reader to leap into the future with bash, explain how Linux used to work, how past versions of .NET and Java were breakthroughs, how to code with XML, ...

As a junior who has hardly read any of these, I find them pretty poetic, and I like the reflection they bring on IT jobs. The languages and technologies will change, but good-looking code is timeless.


Fun!

Precarious. Very slow. Like a game of Jenga, things made you nervous. Waiting for tapes to rewind, or slowly feeding in a stack of floppies, knowing that one bad sector would ruin the whole enterprise. But that was also excitement. Running a C program that had taken all night to compile was a heart-in-your-mouth moment.

Hands on.

They say beware a computer scientist with a screwdriver. Yes, we had screwdrivers back then. Or rather, developing software also meant a lot of changing cables and moving heavy boxes.

Interpersonal.

Contrary to the stereotype of the "isolated geek" rampant at the time, developing software required extraordinary communication habits, seeking other experts, careful reading, formulating concise questions, and patiently awaiting mailing list replies.

Caring.

Maybe this is what I miss the most. 30 years ago we really, truly believed in what we were doing... making the world a better place.


In 1992 I had an office with a door, a window, a Sun workstation, lots of bankable vacation time, a salary that let me buy a house in a nice neighborhood, and coded in a programming language that I could completely understand for a machine whose instruction set was tiny and sensible. Now I WFH with a really crappy corp-issued Windows laptop that's sluggish compared to that Sun, haven't had a real vacation in years despite "unlimited" PTO, and have to use a small safe subset of C++ to code for chips whose ISAs have bloated into behemoths. On the plus side, I now have git, instead of Cray's UPDATE, whose underlying metaphor was a box of punched cards.


I'm sad I missed the office-with-door era. I started working in 2011, and my first company still had some real offices, but you had to be pretty senior. I made it to that level right around the time they got rid of them :-( and no other place I've worked at has had them for anyone but execs / senior managers / sales and the like.

To date the best working condition I got to actually experience instead of just watch others have it, was when I interned at a federal government agency in 2009 and had a real cubicle (and an electric standing desk!) -- the same thing Dilbert comics were complaining about so hard in the 90s, but far better than the open office pits that replaced them.

The best part about offices with doors is you didn't need to worry about formal schedules and meeting rooms as much, because anyone could have an ad hoc meeting in their office any time.


What happened to "that era"? Why did we get rid of it?


The price of commercial real estate happened.


Paradoxically, I think that while pay is generally better these days, opportunities were in some ways better back then. If you learnt some good skills (e.g. C++, X/Motif, Unix such as Solaris or HP-UX, TCP/IP and other networking) back then, that'd give you a serious technical edge and good (international) job opportunities for quite some time, whereas nowadays it's harder for anyone, including younger ones, to stay on top of skills and stand out from the pack.

No stand-ups back then. Personally I prefer that now we have agile ceremonies and we actually, you know, talk to each other and collaborate. Back then you could be totally unsociable (desire no interaction at all) or anti-social (sexist etc.) at work and get away with it. Good thing that wouldn't fly now, or at least not to the same extent. I even worked with someone who had a porn stash on their work computer. Management didn't seem to know, or wouldn't have cared if they did. It's kind of mind-boggling when you think that'd just be instant dismissal now (hopefully, in any sane organisation).

Work-life balance I think is better now, as flexible working is easier to get, and with WFH. That said, when I was young, in the UK, I had no trouble getting a month off to go travelling (albeit I did request that vacation almost a year in advance). We used to do Rapid Application Development / iterative design, which resembles agile. Nothing new under the sun, as the saying goes.

Appraisal / performance was probably better for a lot of people those days. It pretty much meant once a year pull out your objectives, find out what you worked on had entirely changed, so "reverse engineer" your objectives to be a list of the things you'd achieved for the year, pat on the back, meagre pay rise, but far less of the rivalry and performance nonsense that seems to pervade particularly US companies these days.


> now we have agile ceremonies and we actually, you know, talk to each other and collaborate

I'm not gonna get on the agile ceremony topic right now, but I think this is entirely a function of where you were. All of the greatest environments I've ever worked in were more than 20 years ago. Everyone was genuinely into the field. Hacking in the weird offices full of prog and myriad metal genres... That wasn't my archetype, but most were, and I had a great time hanging out in that world.


Oh certainly - the world was so desperate for computer people and there was no time to build up college courses for this stuff (it was way too new) that anyone who could do anything computer-related could easily get a decent job. This lasted until the 2000 crash, but in the right areas continued on for quite awhile.


You could get away with "I know HTML" and suits would call you a "Programmer!"


As someone who started working very young, and who still has dark hair that helps pass IC interviews with 22yo hiring managers, :) I offer one person's boots-on-the-ground perspective from the two eras...

The biggest change is that software became a high-paying, high-status job. (In parallel, "nerd" became a cool thing to be, maybe not for entirely different reasons.)

I still remember when the dotcom boom started, and I first saw people claiming to be experts on Web development (and strutting about it) while wearing what I assumed were fashionable eyeglasses. I had never seen that before.

Before software was a high-status job -- although in many areas it was dominated by men (maybe because a lot of women were homemakers, or discouraged from interests that led to software) -- there were a lot of women, and we're only recently getting back to that.

About half my team one place was female, and everyone was very capable. Software products were often done by a single software engineer, teamed up with a marketing product manager and technical documentation, and often with a test technician/contractor. To give you an idea, one woman, who had a math degree, developed the entire product that instrumented embedded systems implemented in C, for code path coverage and timings, and integrated with our CASE system and our in-circuit emulator. Another of my mentors was a woman who worked 9-5, had kids, and previously worked on supercomputing compilers (for what became a division of Cray) before she came to work on reverse-engineering tools for our CASE system.

Another mentor (some random person from the Internet, who responded to my question about grad school, and she ended up coaching me for years), was a woman who'd previously done software work for a utility company. Back when that meant not just painting data entry screens and report formats, but essentially implementing a DBMS and some aspects of an operating system. And she certainly didn't get rich, wielding all those skills, like she could today.

Somewhere in there, software became a go-to job for affluent fratboys, who previously would've gone into other fields, and that coincided with barriers to entry being created. Which barriers -- perhaps not too coincidentally -- focused on where and whether you went to college, rather than what you can do, as well as "culture fit", and rituals not too far removed from hazing. Not that this was all intentional, and maybe it's more about what they knew at the time, but then it took on a life of its own.


I was doing Turbo Pascal, C and assembly back then. What I liked better, at least where I worked, was that what you had in your head was it. There were libraries, but, at least where I worked, not very many, and it was tedious to get them, so you just rolled your own.

Focus was more (again, for me) on doing clever stuff, versus what I do today: integrating and debugging stuff made by other people.

Compensation was lower than it is now (obviously?), but it was, similar to now, higher than most jobs. 30 years ago, unlike 40 years ago, you already could get jobs/work without a degree in software, so it was life changing for many, including me, as I was doing large projects before and in university which gave me stacks of cash while my classmates were serving drinks for minimum wage and tips.

I guess the end result these days is far more impressive in many ways for the time and effort spent, but the road to get there (stand-ups, talks, agile, bad libraries/SaaS services, fast-changing ecosystems, DevOps) for me has mostly nothing to do with programming, and so I don't particularly enjoy it anymore. At least not that part.

Process reflected that; our team just got a stack of paper (brief) and that’s it; go away and implement. Then after a while, you brought a cd with the working and tested binary, went through some bug fixes and the next brief was provided.

One of the stark differences I found, at least in my country (NL), is that at the end of the 80s and beginning of the 90s, almost all people told me: you program for 5-8 years and then become a manager. Now, even the people who told me that at the time (some became, on an EU scale, very prominent managers), tell me not to become a manager.


    Process reflected that; our team just got a stack of 
    paper (brief) and that’s it; go away and implement. 
    Then after a while, you brought a cd with the working 
    and tested binary, went through some bug fixes and 
    the next brief was provided.
Oh my god I wish I could live in this world for a little while. Like you, I'm not sure I enjoy this any more.


>tell me to not become a manager

Why though?


Take all these stories with a grain of salt, because one of the key reasons things were better (at least for me) 30 years ago is realistically the fact that I was 30 years younger back then. So everything was new to me, I had a lot more time (I was at the end of high school when I started programming for money), and most importantly I had a lot more energy (and less patience, but energy made up for it).

Realistically, it was way harder. Tools were rough, basically just text editors, compiling was slow and boring, and debugging tools were super primitive compared to today's. And to learn programming back then, especially making the first steps, you had to be really motivated. There was no Internet the way we know it today, so you had to dig a lot to get the info. The community was small and closed, especially to complete newbies, so you had to figure out all of the basic steps by yourself and sort of prove yourself. In the beginning you had to really read books and manuals cover to cover many times, and then later you'd end up spending hours turning pages, looking for some particular piece of info. It was super time-consuming; I spent nights and nights debugging and figuring out how to call some routine or access some hardware.

On the other hand it had that sense of adventure, obtaining some secret knowledge, like some hermetic secret society of nerds that you've suddenly become initiated into, which to high-schooler me was super exciting.


I loved working in the 90s. Things were less “professional “ and project management often almost didn’t exist. In my first job I was presented a problem and told to come back in a few months. No standups, management was all engineers.

I also feel back then software wasn’t viewed as a well paid career so you had more people who really wanted to do this because they were interested. When I look around today there are a lot of people who don’t really like the job but do it because it pays well.

It was also nice to not have to worry much about security. If it worked somehow, it was good. No need to worry about vulnerabilities.


100% agree, I had the same experience


Came here to see the "Old Timers" reply... then realized my first coding job was 1993 (29 yrs).

I recall the big disruptors being a transition to GUIs, Windows, and x86 PCs. DOS and command line apps were on the way out. Vendors like Borland offered "RAD" tools to ease the Windows boilerplate. Users were revolting and wanted to "mouse" over type.

The transition from C to C++ was underway. The code I worked on was full of structs and memory pointers. I was eager to port this to classes with a garbage collector, but there were vtable lookup and performance debates.

Ward's Wiki was our community platform to discuss OOP, design patterns, and ultimately where Agile/XP/SCRUM were defined. https://wiki.c2.com/

Work was 100% 9am-5pm Mon-Fri in the office. It was easier to get in the flow after hours, so 2-3 days per week involved working late. With PCs, it was also easier to program and learn at home.

Comp was ok, relative to other careers. I recall by 1995 making $45K per year.


My first job, in the early-90s, paid $25K/year. I thought I was rich. LOL


I started my career 30 years ago at Apple. There was no internet, no option to work from home. We wrote software that would eventually be burned onto disks. Whatever we used for source code control was quite primitive. Seeing your work go "live" would take months. We were working on cutting-edge stuff (speech recognition in my case) in an environment buzzing with energy and optimism about technology.

Compensation was average: there were no companies with inflated pay packages, so all my engineering friends were paid about the same. Friendly (or not) rivalries were everywhere: Apple vs IBM vs Next vs Microsoft. I'd grown up imagining Cupertino as a magical place and I finally got to work in the middle of it. After the internet, the get-rich-quick period launched by Netscape's IPO, the business folks took over and it's never been the same.


I heard there was a big tech "cartel" in the software engineering labor market. Microsoft wouldn't hire people from Apple and vice versa. That made salaries pretty average for everyone, and definitely lower than they should've been.


30 years ago? I remember:

1) Burn and crash embedded development – burn EPROMs, run until your system reset. Use serial output for debugging. Later, demand your dev board contains EEPROM to speed up this cycle.

2) Tools cost $$$. Cross-compilers weren’t cheap. ICE (in-circuit emulation) for projects with a decent budget.

3) DOS 5.0 was boss! Command line everything with Brief text editor with text windows.

4) Upgrading to a 486dx, with S3 VGA – Wolfenstein 3D never looked so good!

5) The S3 API was easy for 1 person to understand. With a DOS C compiler you could roll your own graphics with decent performance.

6) ThinkPad was the best travel laptop.

7) Sharing the single AMPs cellphone with your travel mates. Service was expensive back then!

8) Simple GANTT charts scheduled everything, +/- 2 weeks.

9) You could understand the new Intel processors – i860 and i960 – with just the manuals Intel provided.

10) C Users Group, Dr Dobbs, Byte, Embedded Systems Journal and other mags kept you informed.

11) Pay wasn’t great as a software dev. No stock options, bonus, or overtime. If your project ran late you worked late! A few years later the dot com boom raised everyone’s wages.

12) Design principles could be as simple as a 1-page spec, or many pages, depending on the customer. Military customers warranted a full waterfall with pseudo-code, and they paid for it.

13) Dev is much easier today as most tools are free, and free info is everywhere. However, system complexity quickly explodes as even “simple” devices support wireless interfaces and web connectivity. In the old days “full-stack” meant an Ethernet port.


> DOS 5.0 was boss! Command line everything with Brief text editor with text windows.

I occasionally miss Borland Turbo C. These days people would regard it as magic to get a whole IDE and compiler and set of system headers into a couple of megabytes.


30 years ago was a great time; software development was really ramping up. The internet was like a "secret" which not many people knew about, dominated by ftp / usenet / irc / gopher / telnet / email. Books and magazines were a BIG thing in the world of software development; they were your main sources of information.

Processes were a mixed bag, mostly based around waterfall-type ideas, but most places just did things ad hoc. OO was in its infancy. There was a lot of thinking around "modelling" as a way to create software instead of code. There was a lot of influence from structured programming. We were on the borderline between DOS and Windows, so a lot of stuff was just single-threaded programs. There were still a lot of the cool 80s computers around: Amigas, Ataris, etc. Apple was off on the side doing its own thing as it always has; mostly you associated it with desktop publishing.

Pay is probably better now, I think; work-life balance is the same if you go into a workplace, and better if you work from home. While the 80s/90s were fun and exciting, things are really good now.


FOR ME, 30 YEARS AGO:

  - 1 language (DATABASIC) Then it did everything. I still use it now, mostly to connect to other things.
  - 1 DBMS (PICK) Then it did everything. I still use it now, mostly as a system of record feeding 50 other systems.
  - Dumb Terminals. Almost everything ran on the server. It wasn't as pretty as today, but it did the job, often better. Code was levels of magnitude simpler.
  - Communication with others: Phone or poke your head around the corner. No email, texts, Teams, Skype, social media, Slack, Asana, etc., etc., etc. 1% of the interruptions.
  - Electronic communication: copper or fiber optic. Just worked. No internet or www, but we didn't need what we didn't know we would need someday. So simple back then.
  - Project management. Cards on the wall. Then we went to 50 other things. Now we're back to cards on the wall.
  - People. Managers (usually) had coded before. Users/customers (usually) had done the job before. Programmers (usually) also acted as Systems Analyst, Business Analyst, Project Manager, Designer, Sys Admin, Tester, Trainer. There were no scrum masters, business owners, etc. It was waterfall and it (usually) worked.
MOST IMPORTANTLY:

  - 1992, I spent 90% of my time working productively and 10% on overhead.
  - 2022, I spend 10% of my time working productively and 90% on overhead.

  Because of this last one, most of my contemporaries have retired early to become bartenders or play bingo.

  1992 - It was a glorious time to build simple software that got the customer's job done.
  2022 - It sucks. Because of all the unnecessary complications, wastes of time, and posers running things.

  Most people my age have a countdown clock to Social Security on their desktop. 30 years ago, I never could have imagined such a state would ever exist.


> 2022, I spend 10% of my time working productively and 90% on overhead

Is it because the nature of work/programming has changed? Or now you're in a more "leadership/managerial" position that requires you to manage people and ergo feels like overhead.


It is because the nature of work/programming has changed.

I got sucked into "leadership/managerial" a few times but quickly escaped.

I just want to build fricking software! It's the coolest thing ever and I was born to do it.

Now I have to do it after hours on my own because I'm so damn busy in meetings all day long.


One thing you almost -never- saw then was this:

"Hey guys, check out this thing X I made with absolutely no reason other than to see if I could and to understand Y problem better"

replies:

- "idk why anybody would do this when you could use Xify.net's free tier"

- "you could have simply done this with these unix tools and a few shell scripts, but whatever creams your twinkie"

- "just use nix"

Instead what we had were cheers, and comments delayed while people devoured the code to see what hooked you in, and general congratulations - most often this led to other things and enlightened conversations.

Everything's always been a 'competition' so to speak, but we weren't shoving each other into the raceway barriers like we do now on the way to the finish line. There was a lot more finishing together.


I think much of that really depends on what kind of company/department you worked for, like I'd guess it does now.

Apart from that, there were more constraints from limited hardware that forced you to get creative and come up with clever solutions for things that you can now afford to do in a more straightforward (or "naive") manner. It helped that you could (and usually did) pretty much understand most if not all of the libraries you were using. Chances are you had a hand in developing at least some of them in the first place. Fewer frameworks I'd say and fewer layers between your own code and the underlying hardware. Directly hacking the hardware even, like programming graphics card registers to use non-standard display modes. And I'd guess the chance that your colleagues were in it for the magic (i.e. being nerds with a genuine passion for the field) more than for the money was probably better than now, but I wouldn't want to guess by how much.

Oh, and the whole books vs internet thing of course.


Re >> "I think much of that really depends on what kind of company/department you worked for"

Reminds me of: "I update bank software for the millennium switch. You see, when the original programmers wrote the code, they used 2 digits instead of 4 to save space. So I go through thousands of lines of code... you know what, I hate my job, I don't want to talk about it." ~ Peter Gibbons

I wasn't programming in the workplace 20-30 years ago, but I believe you're right when you say: Depends on the company/department.


30 years ago. Man. I was working at an OpenVMS shop, cranking out DCL code and writing mainly in FORTRAN. Books and manuals littered my wee little cubicle. I had vt220 and vt420 terminals because Reflections rarely worked correctly on my hardly-used PC. I also had a terminal to an HP 3000 system running MPE, and had to do code review and testing on an app that was written in BASIC!

Version control was done using the features of the VMS filesystem. I believe that HP MPE had something like that also, but I may have blocked it out.

Around late '93 / early '94 they hauled the HP terminal away and slapped a SparcClassic (or IPX? IPC?) in its place. I was tapped to be part of the team to start migrating what we could off the VMS system to run on Solaris. So, I had to learn this odd language called See, Sea, umm 'C'?

A whole new set of manuals. A month's salary on books. Then another few books on how to keep that damn Sparc running with any consistency.

Then had to setup CVS. Sure, why not run the CVS server on my workstation!

By the end of '95 I was working mainly on maintaining the Solaris (and soon HP/UX and AIX) boxes rather than programming.

I still miss writing code with EDT on VMS and hacking away on fun things with FORTRAN. You know, like actually writing CGIs in FORTRAN. But that is another story.


I was working in Pascal, C and assembly about 30 years ago, mostly in DOS and Windows 3.

By 1995 I started dabbling with websites, and within a couple of years was working mostly with Perl CGI and some Java, on Windows and Linux/NetBSD.

Most of my work was on Windows, so that limited the available Perl libraries to what would run on ActiveState's Perl.

I gave up trying to do freelance because too many people didn't seem to understand the cost and work involved in writing software:

- One business owner wanted to pay me US $300 to fix some warehouse management software, but he'd up it to $500 if I finished it in one month.

- A guy wanted to turn his sports equipment shop into an e-commerce website, and was forward-thinking... except that none of his stock of about 20,000 items was in a database, and he could "only afford to pay minimum wage".

I interviewed with some companies, but these people were clueless. It seems like a lot of people read "Teach yourself Perl in 7 days and make millions" books. The interview questions were basically "Can you program in OOP with Perl?".

I got a proper developer job on a team, eventually. They were basically happy that I could write a simple form that queried stuff from a database.

Some other people on my team used Visual Basic and VBScript but I avoided that like the plague. I recall we had some specialized devices that had their own embedded versions of BASIC that we had to use.

When Internet Explorer 4 came out, we started having problems making web sites that worked well in both major browsers.

Web frameworks didn't exist yet, JavaScript was primitive and not very useful. Python didn't seem to be a practical option at the time.


Actually, I should answer the questions:

> ... terms of processes, design principles, work-life balance, compensation. Are things better now than they were back then?

We didn't have an official process or follow any design principles. Teams were small, so we simply had a spec, but we'd release things in stages and regularly meet with clients.

I had a decent work-life balance, a decent salary but wasn't making the big dot.com income that others were making.

I think overall things are better, technology-wise as well as some awareness of work-life balance, and more people are critical of the industry.

The technology is more complicated, but it does a lot more. The simplicity was largely due to naivety.


The core difference was that the job required much more vertical reasoning. Crafting things from the ground up was the norm. Starting from a blank code file and implementing core data structures and algorithms for the domain was often the case. Limited resources meant tight constraints and much more attention to efficiency. Much weaker tooling required more in-depth knowledge rather than trial-and-error development. There was also no web or Google, so finding things out meant either books or newsgroups.

These days the demand is often more horizontal: stringing together shallowly understood frameworks, libraries, and googled code, and getting it to work by running and debugging.

The scope of things you can build solo these days is many orders of magnitude larger than it was back then.

Still, the type of brainwork required back in the day was most definitely more satisfying, maybe because you had more control and ownership of all that went into the product.


> The scope of things you can build solo these days is many orders of magnitude larger than it was back then.

This is the most positive phrase on this whole page.

It's easy to get stuck in nostalgia but harder to realise how good we have it now. Thanks.


It was very process heavy in my experience. Because of the available technology, development was slow and costly, so the thought was to put a lot of process around development to minimize the chances of projects going off the rails.

We were also in the "object nirvana" phase. Objects were going to solve all problems and lead us to a world of seamless reusable software. Reusability was a big thing because of the cost. Short answer: they didn't.

Finally, I am astonished that I'm routinely using the same tools I was using 30 years ago, Vim and the Unix terminal. Not because I'm stuck in my ways; it's because they're still state of the art. Go figure.

I'd never go back. The 90's kind of sucked for software. Agile, Git, Open source, and fast cheap computers have turned things around. We can spend more time writing code and less time writing process documents. Writing software has always been fun for me.


What's amazing about the major improvements you list there (besides the cheap computers) is that many of them could easily have been done on 80s/90s equipment. We had rcs and cvs but most people just had a batch file that made a copy of the working directory, if anything.

But then a lot of software was relatively "simple" back then, in terms of total lines of code; most everything today is a massive project involving many developers and much more code. Necessity brought around things like git.


I missed a big change in my list which is the Internet. We were using the internet in 1992. I remember FTP'ing source code from machines in Finland from here in Silicon Valley. I knew it was going to change the world.

Ubiquitous networking means that new developments are now instantly available instead of having to wait for the publishing cycle. People can share instantly.

Open source changed everything. In 1992 you'd need to get your accounting dept to cut a PO and then talk to a sales person to get new software. Most software being freely available today has been like rocket fuel for the industry. If you'd told me in 1992 that in 30 years most software would be distributed open source, and that the industry as a whole is making much more money, I'd have said you were crazy.

That all said, this particular conversation would have been occurring over Usenet in 1992 using almost exactly the same format.


Yeah it's absolutely amazing to think of the huge, HUGE names in the computer world from the 90s whose only product was something that is now entirely open source and given away for free.

I'm talking not only things like Netscape Navigator but all the server software vendors, etc. And even where things still exist (the only real OS manufacturers anymore are Microsoft and Apple; everything else is a Linux) they've been driven free or nearly so. Windows 95 was $209.95 on launch ($408.89 in today's dollars) but a boxed copy of Windows 11 is $120 - and MacOS is now "free".


> We had rcs and cvs but most people just [...]

To be fair RCS and CVS sucked. I remember trying to use CVS in the early 2000s when SVN wasn't even out yet, and if I remember the experience correctly, even today I might be tempted to just write a batch file to take snapshots...


Branching in RCS and CVS was rather confusing.


I worked at IBM as a student intern in the mid-80s, and at the Royal Bank of Scotland.

Process at these companies was slow and waterfall. I once worked with a group porting mainframe office-type software to a minicomputer sold to banks; they had been cranking out C code for years and were scheduled to finish in a few more years, and the developers were generally convinced that nothing would ever be releasable.

The people were smart and interesting - there was no notion of doing software to become rich, pre-SGI and pre-Netscape, and they all were people who shared a love of solving puzzles and a wonder that one could earn a living solving puzzles.

IBM had a globe spanning but internal message board sort of thing that was amazing, conversations on neurology and AI and all kinds of stuff with experts all over the world.

I also worked at the Duke CS lab around 1990, but it was hard to compare to companies because academia is generally different. People did the hard and complex stuff that they were capable of, and the grad students operated under the thumb of their advisor.

Wages were higher than, for example, secretarial jobs, but not life-altering for anyone; people didn't care so much.


I started my first real job in 1995 after dropping out of grad school. I was paid about $40K (which seemed like SO MUCH MONEY) and worked at a tiny startup with one other full-timer and two part-timers writing Lisp on Macs for a NASA contract. We had an ISDN line for internet; I mostly remember using it for email/mailing lists and usenet (and reading macintouch.com). We had an AppleTalk LAN, and IIRC one developer workstation ran Apple's MPW Projector for source control, which Macintosh Common Lisp could integrate with.

Our office was in a Northwestern University business incubator building and our neighbors were a bunch of other tech startups. We'd get together once a month (or was it every week?) to have a drink and nerd out, talking about new technology while Akira played in the background.

It was awesome! I got to write extremely cool AI code and learn how to use cutting-edge Mac OS APIs like QuickDraw 3D, Speech Recognition, and Speech Synthesis. Tech was very exciting, especially as the web took off. The company grew and changed, and I made great friends over the next 7 years.

(Almost 30 years later I still get to write extremely cool AI code, learn new cutting edge technologies, find tech very exciting, and work with great people.)


1995, depends on what stack you were on.

The DOS/Windows stack is what I worked on then. Still using floppies for backup. Pre-standard C++, Win16, Win32s if it helped. Good design was good naming and comments. I was an intern, so comp isn't useful data here.

Yes, things are much better than then. While there were roots of modernism back then, they were in the ivory towers / Really Important People areas. Us leeches on the bottom of the stack associated all the "Software Engineering" stuff with expensive tools we couldn't afford. Now version control and test frameworks/tools are assumed.

Processes didn't have many good names, but you had the same variation: some people took process didactically, some took it as a toolbox, some took it as useless bureaucracy.

The web wasn't a big resource yet. Instead the bookstores were much more plentiful and rich in these areas. Racks and racks of technical books at your Borders or (to a lesser degree) Barnes & Noble. Some compiler packages were quite heavy because of the amount of printed documentation that came with them.

Instead of open source as we have it now, you'd go to your warehouse computer store (e.g. "Soft Warehouse" now called Micro Center) and buy a few cheap CD-ROMs with tons of random stuff on them. Fonts, Linux distros, whatever.


It's underappreciated just how easy and cheap floppies were. Everyone had a box of (if you were fancy, preformatted) floppies next to their computer, and throwing one in and making a copy was just something you did all the time, and giving the floppy to someone to have and to keep was commonplace. That didn't really get replaced until CD burners became cheap; then everyone had a stack of blank CDs next to them, but they were not as convenient. It was easy to make backups and keep them offline and safe (though people would still not do it at times).

Even today, most people do NOT have a box of USB sticks that they can give away - they can use one to transfer something but you'll want the stick back. Throwing something on the internet and sending a download link is the closest we have, and it has some advantages, but it's not the same.


I recall the time in the 90s when some friends and I left a floppy of Doom as a tip for our waiter.


Thirty years ago was 1992. I was an employee of Apple Computer.

I had spent a year or two working on a knowledge-based machine-control application that Apple used to test prerelease system software for application-compatibility problems. A colleague wrote it in Common Lisp, building it on top of a frame language (https://en.wikipedia.org/wiki/Frame_(artificial_intelligence...) written by another colleague.

The application did its job, using knowledge-based automation to significantly amplify the reach and effectiveness of a small number of human operators, and finding and reporting thousands of issues. Despite that success, the group that we belonged to underwent some unrelated organizational turmoil that resulted in the project being canceled.

I interviewed with a software startup in San Diego that was working on a publishing app for the NeXT platform. I had bought a NeXT machine for learning and pleasure, and had been tinkering with software development on it. A bit later, another developer from the same organization at Apple left and founded his own small startup, and, knowing that I had a NeXT cube and was writing hobby projects on it, he contracted with me to deliver a small productivity app on NeXT. In the course of that work I somehow found out about the San Diego group and started corresponding with them. They invited me to interview for a job with them.

I liked San Diego, and I really liked the guys at the startup, and was very close to packing up and moving down there, but then the Newton group at Apple approached me to work for them. The Pages deal was better financially, and, as I say, I really liked the people there, but in the end I couldn't pass up the chance to work on a wild new hardware-software platform from Apple.

In the end, Newton was not a great success, of course, but it was still among the most fulfilling work I've ever done. Before I was done, I had the opportunity to work on a team with brilliant and well-known programmers and computer scientists on promising and novel ideas and see many of them brought to life.

On the other hand, I also overworked myself terribly, seduced by my own hopes and dreams, and the ridiculous lengths that I went to may have contributed to serious health problems that took me out of the workforce for a couple of years a decade later.

But 1992 was a great year, and one that I look back on with great fondness.


You do wonder about Newton... Had better infrastructure existed (esp. home networking, wifi, and 'fast enough' cellular), could Newton have become the Next Big Thing in computing after the Macintosh? I suspect that Jobs learning first-hand about the need for essential infrastructure was essential to the iPhone successfully taking flight 15 years later (esp. thanks to 3G).


In an absolute sense, it's a lot better now, but in a relative sense it's a lot worse.

30 years ago the production values on software were a lot lower, so that a single programmer could easily make something that fit in with professional stuff. My first video game (written in turbo pascal) didn't look significantly worse than games that were only a few years old at the time. I can't imagine a single self-taught programmer of my talent-level making something that could be mistaken for a 2018 AAA game today.

The other major difference (that others have mentioned) is information. It was much harder to find information, but what you did end up finding was (if not out-of-date) of much higher quality than you are likely to get from the first page of google today. I can't say if it's better or worse; as an experienced programmer, I like what we have today since I can sift through the nonsense fairly easily, but I could imagine a younger version of me credulously being led down many wrong paths.


> I can't imagine a single self-taught programmer of my talent-level making something that could be mistaken for a 2018 AAA game today.

They might not look like 2018 AAA games, but there are so many indie games that vastly eclipse AAA titles in every other way... even "double A" or smaller big studio titles can be better than AAA titles if you judge by things other than flashy graphics.

And it's gotten much easier to make a game if you're willing to use something other than ultra-real graphics - Unity, for instance, has massively improved the ability for relative amateurs to make great engaging games.

Thinking of the games I play most days now, only one of them (Hunt: Showdown, by Crytek) is anything approaching triple A. I haven't touched a big budget big studio game in years.

So I think at least in gaming, things have gotten better, not worse. Yeah, the ceiling has been raised, but the floor has also dropped such that the level of effort required to make a decent game is lower than ever.


That's fair; I'm too busy to game these days, so I am certainly not in touch with what gamers' expectations are. I was just thinking that e.g. single-screen puzzle platformers were easy to make 30 years ago and wouldn't be too far off from something that would be "cool".


I would like to know! I can go back 22 years. Then, jobs were more likely to be apps running on Windows. People yelling at the screen because things weren't rendering properly (render as in pixels, not virtual DOMs!). No unit tests. SourceSafe (an old, buggy, but simple-to-use VCS). You could exclusively lock a file to annoy other developers and stamp the importance of your work. No scrum and much less process. 9-5-ish and no time tracking. No OKRs or KPIs. Do everything with Microsoft tooling. No open source tooling. It was someone's job to build an installer and get it burned to a CD (the optical storage medium). There was some automated testing but no unit tests or CI/CD. Not so many "perks" like snazzy offices, toys, food supplies, etc. If there was webdev it would be in ASP or ActiveX!


That's a very windows-centric view of the past. And with good reason too! Windows was utterly dominant back then. Still, Slackware was 7 years old by the year 2000. Running the 2.2 Linux kernel, compiled with open source GCC. Websites were cgi-bin and perl. Yeesh I've been running Linux a long time...

On the windows side, NSIS was an open source piece of tooling released that year. And I was writing Windows programs in Visual Studio with MFC.


> That's a very windows-centric view of the past. And with good reason too! Windows was utterly dominant back then.

Running servers on Windows? Yeah, a few people who didn't know better did that, but it would be completely inaccurate to describe Windows as "completely dominant". It ruled the desktop (and to a large extent still does), but it barely made it to parity with *nix systems on the server side before Linux (and FreeBSD in some cases) punched down.


A few people?

IIS had 37% market share by 2000.

https://www.zdnet.com/article/how-does-iis-keep-its-market-s...


Yep, that's a few people. That was about its peak market share, until a brief spike circa 2017, and then it crashed and burned into obscurity.


I imagine the crash caused by not needing it anymore to run .NET applications.


A third is a “few”?


No, a third is just not "utterly dominant".


It entirely depends on what you are counting, but I do think your comment is extremely misleading because Microsoft was important for business web servers in 2000. “a few people who didn't know better did that” is outright deceptive.

  The dominant position of Microsoft’s proprietary IIS in the Fortune 500 makes Windows NT a lock for the most used operating system undergirding the Web servers -- 43 percent. But the idea that Sun Microsystems Inc.’s Internet presence is weakening isn’t supported by the numbers. Sun’s Solaris holds a clear second place at 36 percent, with all other operating systems falling into the noise level. Linux showed up at only 10 companies.
That quote is from https://esj.com/articles/2000/06/14/iis-most-used-web-server...

It is fair to say that in 2000 Linux was beginning its growth curve for web servers, and all other OS’s were starting their decline. I do note the Fortune 500 had a lot fewer tech companies back then (zero in the top 10) and churn has increased a lot (perhaps due to not following technological changes): “Fifty-two percent of the Fortune 500 companies from the year 2000 are now extinct.”, “Fifty years ago, the life expectancy of a Fortune 500 brand was 75 years; now it’s less than 15”.


22 years ago I was programming on an almost entirely open source stack: Linux servers, vim, Perl, and we paid for Sybase. We used CVS for source control, and when I heard about SourceSafe's restrictions I was shocked.

We had unit tests, though it was your own job to run them before merging. If you broke them you were shamed by the rest of the team. We also had a dedicated lab for automating functional tests and load testing using Mercury Interactive's tooling (don't miss that) that we would use to test things out before upgrading our servers.

We used the techniques outlined in Steve McConnell's Rapid Development, a sort of proto-agile (and, editorializing, it got all the good parts right while scrum did the opposite).


I had all of this 11 years ago and it was BLISS. Oh, MS SourceSafe and its locked files! No merge conflicts or rebasing clownery, ever! It forced two people working on the same code to sync, and this avoided so many conflicts. Customers called with small bug reports; I could fix them in 5 minutes and deploy to production right from Eclipse.

Modern agile development is hell.


Nothing's stopping you from implementing file locking on a social level!

"Hey I'm gonna be working in the foo/ subdir this week, mind staying out of there till next week?"


Random agile comment at the end of a comment on source control.


I agree with a lot of that - I had to 'invent' unit tests for one of my clients, for example, for their production code.

I managed to swerve MS tooling much of the time, one way or another. For example, I worked a lot with Sun workstations (SunOS/Solaris).


That's nice. I think MS programming stacks were most popular in the UK outside of universities (universities would also have Unix, Oracle DB and SunOS). I guess in California it would more likely skew Unix/Sun?


I (a) ran a very early Internet provider and then worked in (b) oil and (c) finance where good networking, speed and reliability were enough to make *nix a sensible choice. Though (for example) the finance world tried to move to MS to save money, and indeed I got paid a lot to port and maintain and optimise code across platforms including MS, the TCO thing would keep biting them...


That makes me wonder, without all the unit tests and all the 'necessary' things we do to our codebase, did any of it really help?

Are modern codebases with modern practices less buggy than the ones from 20 years ago?


In 1988, Airbus delivered the first A320 with electronic flight commands, which was safe.

In 1998, the RATP inaugurated line 14 of the Paris metro, which was fully automated, after formally proving that its software could never fail.

Gitlab didn't exist back then, and yet these companies produced code that was safe.

I guess the main driver of code quality is whether the company cares, and has the proper specifications, engineering before coding, and quality management procedures, before the tech tooling.

It certainly is simpler now to make quality code. But don't forget that software used to be safe, and it was a choice of companies like Microsoft, with Windows, or more recently Boeing with the 737 Max, to let the users beta-test the code and patch it afterwards (a.k.a. early, reckless agile).

So yeah, modern code looks less buggy. But it's mainly because companies care, IMO.


> It certainly is simpler now to make quality code.

Just think of the log4j fiasco last year. Or the famous left-pad thing. Perhaps you don't import any dependencies, but just imagine the complexity of (for example) the JVM. Point is, you can surely write "quality code", but even with quality code it's much harder to control the quality of the end product.

Requirements have gotten more complex too. 30 years ago people were generally happy with computers automating mundane parts of a process. These days we expect software to out-perform humans unsupervised (self-driving?). With exploding requirements software is bound to become more and more buggy with the increased complexity.


> I guess the main driver of code quality is ...

picking a task which can be implemented using the sort of processes you describe.

Lots of things cannot be.


Quality assurance and software engineering can be applied everywhere, no matter the processes you use to create and deliver the code.

Methods and tools would be different depending on context, but ANY serious company ought to do quality management. At the very least: know your code, think a few moves ahead, make sure you deliver safe code, and apply some amount of ISO 9001 at the company level (and hopefully much more at every other level).

Also, a security analysis is mandatory both for industrial code and for IT applications, thanks to standards, laws like the GDPR and its principle of privacy by design, and contractual requirements from serious partners. You risk a lot if your code leaks customer data or crashes a plane.

It's the same for having 'specifications'. Call them functional and safety requirements, tickets, personas, user stories, or any other name, but you have to write them to be able to work with the devs, and to describe to your customer and users what you have actually developed.

The 'lots of things [that] cannot be' scares me as a junior engineer.

I feel like they are made by those shady companies that offer 2 interns and a junior to get you a turnkey solution within 12 hours. It also brings back bad memories of homework done at the last minute in uni, and I would never do that again. And as far as I've seen, in both cases the resulting software is painful to use or to evolve afterwards.


> describe to your customer and users what you have actually developed.

In the domain I work in, what customers want (and what we provide) changes monthly at worst, annually at best. And in many cases, customers do not know what they want until they have already used some existing version, and is subject to continual revision as their understanding of their own goals evolves.

This is true for more or less all software used in "creative" fields.


I don't understand how this practice makes your modern code more reliable, sorry.

I was replying to

>Are modern codebases with modern practices less buggy than the ones from 20 years ago?

I understood that @NayamAmarshe was acknowledging the new practices and tools introduced after my examples from the 80s, 90s, and early 2000s (mostly with agile everywhere, and v-methods becoming a red flag on a resume and in business meetings).

It seemed to be the essence of their question.

So all I was saying was that code from back then was capable of being safe. Reliability wasn't invented by modern practices.

Modern practices have only changed the development process, as you mentioned - not the safety. And if anything, they affected safety, as doing provably safe code with the new practices is still being researched at the academic level (check out the case of functional safety vs/with agile methods).

Can you explain how you make your code less buggy than code from 20 years ago that was written with the practices from back then?


My point was that you cannot use the software development processes used in planes and transportation systems in every area of software development. Those processes are extremely reliant on a fully-determined specification, and these do not exist for all (maybe even most?) areas.

If you're inevitably locked into a cycle of evolving customer expectations and desires, it is extremely hard and possibly impossible to, for example, build a full coverage testing harness.


yup, 21st century practices for 21st century business needs

but they don't make the code less buggy per se. They just allow you to patch it faster.


IMO yes. Software is a lot more reliable than it was 25 years ago. This boils down to:

1. Unit/regression testing, CI (see the small sketch after this list)

2. Code reviews and code review tools that are good.

3. Much more use of garbage collected languages.

4. Crash reporting/analytics combined with online updates.
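
To make point 1 concrete, here's a tiny, hypothetical regression test of the sort that simply wasn't standard practice back then (the function and the cases are made up for illustration); CI runs something like this on every commit and fails the build if any assert fires.

  // Hypothetical example: a plain assert-based regression test.
  #include <cassert>
  #include <string>

  // Function under test: strips leading/trailing spaces and tabs.
  static std::string trim(const std::string& s) {
      const std::string ws = " \t";
      const std::size_t b = s.find_first_not_of(ws);
      if (b == std::string::npos) return "";
      const std::size_t e = s.find_last_not_of(ws);
      return s.substr(b, e - b + 1);
  }

  int main() {
      // Each assert is a regression guard; a failed assert breaks the build.
      assert(trim("  hello ") == "hello");
      assert(trim("") == "");
      assert(trim(" \t ") == "");
      assert(trim("no-op") == "no-op");
      return 0;   // exit code 0 == green build
  }

Trivial as it looks, tests like this plus CI mean regressions get caught at commit time instead of by a QA pass weeks later.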

Desktop software back in the early/mid nineties was incredibly unreliable. When I was at school and they were teaching Win3.1 and MS Office we were told to save our work every few minutes and that "it crashed" would not be accepted as an excuse to not hand work in on time, because things crashed so often you were just expected to anticipate that and (manually) save files like mad.

Programming anything was a constant exercise in hitting segfaults (access violations to Windows devs), and crashes in binary blobs where you didn't have access to any of the code. It was expected that if you used an API wrong you'd just corrupt memory or get garbage pixels. Nothing did any logging, there were no exceptions, at best you might get a vague error code. A large chunk of debugging work back then would involve guessing what might be going wrong, or just randomly trying things until you were no longer hitting the bugs. There was no StackOverflow of course but even if there had been, you got so little useful information when something went wrong that you couldn't even ask useful questions most of the time. And bugs were considered more or less an immutable fact of life. There was often no good way to report bugs to the OS or tool vendors, and even if you did, the bad code would be out there for years so you'd need to work around it anyway.

These days it's really rare for software to just crash. I don't even remember the last time a mobile app crashed on me for example. Web apps don't crash really, although arguably that's because if anything goes wrong they just keep blindly ploughing forward regardless and if the result is nonsensical, no matter. Software is just drastically more robust and if crashes do get shipped the devs find out and they get fixed fast.


It improves velocity, not code quality. You can achieve the same quality levels, but making changes takes much more time.

Delivery costs of software are way down in many domains (SaaS teams frequently deliver dozens or hundreds of releases a day). That would not be possible without automated tests.


Is that not self evident? Yeah they're a pain in the ass but you need them if you're going to go refactoring around in the codebase.

