Ask HN: What's your "it's not stupid if it works" story?

sedatk
53 replies
14h6m

I created the most popular Turkish social platform, Eksi Sozluk, using a single plaintext file as its content database back in 1999. It took me only three hours to get it up and running, without any web frameworks or anything. It was just an EXE written in Delphi. The platform's still up, albeit running on .NET/MySQL now, and keeps getting banned by the Erdogan government for baseless or false reasons (like "national security"). Despite being banned, it was the seventh most popular web site in Turkey two weeks ago, and the second most popular Turkish web site in the same list: https://x.com/ocalozyavuz/status/1735084095821000710?s=20

You can find its ancient source code from 1999 here: https://github.com/ssg/sozluk-cgi

The platform is currently at https://eksisozluk1999.com because its canonical domain (https://eksisozluk.com) got banned. Visitors from outside Turkey should get redirected anyway.

Since it's still a legal business entity in Turkey, it keeps paying taxes to the Turkish government, and it even honors content removal requests despite being banned. Its appeals against the bans have been awaiting review by the Constitutional Court for almost a year now.

A news piece from when it was banned for the first time this year: https://www.theguardian.com/world/2023/mar/01/eksi-sozluk-wh...

Its Wikipedia page: https://en.wikipedia.org/wiki/Ek%C5%9Fi_S%C3%B6zl%C3%BCk

HaZeust
11 replies
12h52m

Crazy to see you! Some time ago, I was actually looking to add Eksi to Touchbase (www.touchbase.id) since several users reached out and wanted to add it alongside their other platforms to share on their profile, but we couldn't figure out the URL convention for user profile feeds! It seemed to be "https://eksisozluk1999.com/{{username}}--{{7 digit value}}", but we couldn't find any rhyme or reason to the 7 digits. Are the integers random, or do they stem from a convention in the previous codebase?

sedatk
10 replies
11h58m

User profiles are actually stored at https://eksisozluk1999.com/biri/{{username}}. "/@{{username}}" also redirects to "/biri/{{username}}". You shouldn't need numbers at all. The numbers appear only at the end of topic titles. They are title IDs (sequential integers assigned when the titles are created) to disambiguate conflicting Latinized forms of Turkish words.

noduerme
7 replies
11h8m

Back in 1999 or so, I wrote an online shopping site this way, all the data stored as text files (one per category, with many items in my case ... I was 18 years old and had no idea about databases). The site ran smoothly for almost a year until the customer used "*" in the name of a product... which was the character by which all the product data in the text files was split...

isoprophlex
5 replies
10h38m

That's why you always delimit your data fields in a text file with a Unicode snowman.

Surely no one will ever use that character!

noduerme
3 replies
10h12m

Live and learn. It was the re-split when they saved the new products through my brilliant parser that royally fucked it all up. Genius that I was, I used "|" to separate attributes, but I also definitely used a double asterisk to mean something else. Nothing teaches you not to get clever better than screaming customers and abject failure. And having to find/replace a thousand asterisks to figure out which ones were making the file unreadable. Falling on my face made me the careful coder I am today.

jwoq9118
2 replies
8h58m

Early career chap over here. Awesome hearing stories like this. Those wild west days certainly have passed. We’ve got so much now to get us started as programmers that it almost robs us of learning experiences somehow.

stavros
0 replies
7h25m

> Those wild west days certainly have passed.

Not if you see some of the stuff my coworkers write.

noduerme
0 replies
5h36m

Hah. Well you always need to just learn new things. That's what my life taught me.

Check it out. The year is 1999 or so - [edit: scratch that, more like 2001] - and I'm working at a Starbucks on my laptop. Mind you, wifi does not exist. Having a color laptop is sort of posh. One other person shows up there every day, this kid Jon who's my age, and he's got a laptop. We end up talking. No one even has a cell phone.

Jon's my age and he's writing PHP scripts. So am I. I have a client I built a website for that needs an online store - they sell custom baby blankets and car seat covers. They want a store where you can choose your interior fabric and exterior fabric for each item, and see a preview. They have 10 interior and 20 exterior fabrics. They sew these blankets by hand for each request, for like $100 each. This is a huge job at the time... it pays something like $4000 for me to write the store from scratch. (I'd easily charge $60,000 now for it). First I have to mock up 200 combinations in photoshop?... so instead I write a script that previews the exterior and interior fabrics. Then I write a back-end in PHP to let them upload each fabric and combine them.

One day I'm sitting at the next table to Jon (he was working on a game at the time, I think - fuck, who knows, we were both 18 year old drop outs) - and I showed him how I wrote these fabric combinations to text files. And he was like... "Dude, have you tried SQL? It's AMAZING!" And I was like, "what the fuck is SQL?"

Yes, people used to pay idiots like us to build their websites. I'm still sort of proud of a lot of shit I got to do back then. But I am thankful to Jon that he introduced me to SQL when I was at the time trying to invent databases from scratch with fopen('r') and fopen('w') and hand-built parsers ;)

[edit] Just one little thing I'd note my friend: If you have a brain, it's always the wild west. Those jobs that make you create something from scratch, they haven't evaporated. Sure, it helps to know newer technologies, but the more important thing is being sure you can do what they're asking for, and then figure out a way to do it. This is the hacker ethos.

arein3
0 replies
7h36m

You can also encode the special characters when writing to the file and decode them after reading.
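
A rough sketch of that approach in Python (the delimiter and escaping scheme here are made up for illustration, not what the original site used):

    FIELD_SEP = "|"   # hypothetical field delimiter
    ESCAPE = "\\"

    def encode_field(text):
        # Escape the escape character first, then the delimiter.
        return text.replace(ESCAPE, ESCAPE + ESCAPE).replace(FIELD_SEP, ESCAPE + FIELD_SEP)

    def decode_field(text):
        # Undo the replacements in the reverse order.
        return text.replace(ESCAPE + FIELD_SEP, FIELD_SEP).replace(ESCAPE + ESCAPE, ESCAPE)

    line = FIELD_SEP.join(encode_field(f) for f in ["10oz mug", "glossy|blue", "$12"])

A real implementation would probably just reach for a format that has the escaping built in (CSV with quoting, JSON).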

lelanthran
0 replies
12m

Weird. In the same year (1999), I did pretty much the same thing (because strtok really made it easy to split a string) also for client input fields.

Only, I used the ASCII FS character (the Field Separator character) and everything worked brilliantly.
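
For anyone curious, the same idea in a few lines of Python, using ASCII's purpose-built separator control characters (0x1E/0x1F here; an illustrative sketch, not the parent's actual code):

    RS = "\x1e"   # ASCII record separator
    US = "\x1f"   # ASCII unit separator, used here between fields

    def save(path, records):
        with open(path, "w", encoding="utf-8") as f:
            f.write(RS.join(US.join(fields) for fields in records))

    def load(path):
        with open(path, encoding="utf-8") as f:
            data = f.read()
        return [rec.split(US) for rec in data.split(RS)] if data else []

    save("products.txt", [["blanket", "fleece", "100"], ["seat cover", "denim", "80"]])

Since users essentially never type those control characters, the data never collides with the delimiters.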

thesoursloth
1 replies
8h4m

Additionally: if a nickname has spaces, we have to type "%20" instead of spaces in links using "/@{{username}}".

I submitted an entry about this a few minutes ago: https://eksisozluk1999.com/entry/143247963

thesoursloth
0 replies
7h3m

*Months, not minutes. Sorry, autocorrect (I wasn't using an English keyboard).

pjot
10 replies
13h41m

Using Apple’s translate function I was able to read many of the posts - very interesting to see the differences between American and Turkish social media.

There were many posts about cats and their livelihoods and protection. Love that

throwup238
7 replies
13h26m

> There were many posts about cats and their livelihoods and protection. Love that

Turks have a wonderful relationship with cats, especially in Istanbul: https://en.wikipedia.org/wiki/Feral_cats_in_Istanbul

There is a nationwide no-catch and no-kill policy for feral cats.

philwelch
3 replies
11h33m

There’s a theory that cats mostly domesticated themselves; human settlements and their large grain stores proved to be a reliable source of rodents for them to hunt, and the humans tolerated the cats because they kept the rodent problem in check, but these cats would have lived a semi-domesticated lifestyle around human settlements without initially being kept as household pets. Maybe the feral cats of Istanbul are the closest modern approximation to this.

hutzlibu
1 replies
8h44m

When I was a kid, that still used to be the norm on farms. There were farm cats and house cats. The farm cats were there to kill rodents and otherwise minded their own business. You couldn't just pick them up; they would have bitten you. I think this has gone out of style, as I haven't seen that division nowadays and all cats seem to have become tame.

philwelch
0 replies
3h0m

Barn cats are still a thing, but they are typically still owned and kept whereas I was talking more about free roaming cats that live around human settlements. The early free roaming cats would have been about as tame as barn cats; my impression is that the cats of Istanbul are more friendly.

jschrf
0 replies
10h49m

Definitely in line with the axiom of sits where fits

explaininjs
2 replies
8h36m

Would love to see that implemented here in the USA. People in places like NYC love to catch and spay every cat they see, then go on to complain about too many rodents around.

op00to
1 replies
5h7m

Rodents are an issue of trash and food left around, not a problem of not enough cats.

Cats shit in my garden and leave dead songbirds where I grow food. No, we don’t need more cats.

pjot
0 replies
2h49m

Do you not think rodents are in your garden … where food is left around?

sedatk
1 replies
13h30m

I'm very glad to hear that it's readable using a translator!

In fact, the community dynamics resemble Reddit a lot despite significant differences in layout and format. Irony, sarcasm, and harsh criticism are common, yet tolerance of differing viewpoints is relatively high compared to other platforms, where people just flock to their own bubbles or block everyone they don't agree with.

It's fun too, has a rich history spanning a quarter century, and has been quite influential.

prox
0 replies
8h18m

Don't forget to archive it with responsible parties, for future history and anthropological research. It would be a shame to lose so much public discourse, especially if it's so influential.

mavili
7 replies
11h42m

Not completely baseless reasons if you don't have any meaningful moderation on the platform. You're quick to blame your own government at the first convenience when you don't like the fact that your system may be harming society. All governments try to take measures to control the spread of misinformation; Europeans and Americans do it by forcing social media platforms to silence opposing views by labelling them "misinformation", and Türkiye does the same. No difference.

sedatk
4 replies
11h22m

Among the millions of entries on the platform, not a single piece of content was presented as evidence for the ban decisions. Just ambiguous words or false claims.

Shouldn't it be straightforward to prove that Eksi Sozluk lacks "meaningful moderation"? Shouldn't it be a requirement for such a drastic action like banning the whole web site?

Twitter produces orders of magnitude more disinformation in volume, amplified way faster and way broader too, yet they don't get any ban from Turkey whatsoever. How do you explain this kind of double standard?

mavili
3 replies
6h56m

It should be pretty straightforward for you to show you have any moderation whatsoever; I don't believe you do. The whole of the site is full of rubbish.

If I were the Turkish government I would never even do it the favour of banning the site, because that draws attention the site doesn't deserve. I don't care if it's the most visited site or whatever, it's just useless.

wholinator2
1 replies
5h28m

Well, whenever we enter an evidence war like this we must go back to the old standard. The burden of proof is on the accuser. It has to be. If the burden of proof is always on the defendant, all you need is 30 people making accusations and it becomes impossible to defend against. It's basically a legal DDOS.

Also, I think every logical person can see that it's much, much easier to provide a single example of a lack of moderation than it is to satisfy a nebulous "prove you have moderation". What kind of standard do you have for that, and how do we know you're not going to shift the goalposts the moment they bring you what you ask for? An example is an example is an example. Provide your proof or cease accusations. I've seen this argument many times in my country, always used to shut down free expression and enforce repression. There are great books and videos on logical fallacies out there.

mavili
0 replies
3h45m

Lol, no need for an essay. I didn't mean the service provider has to prove they have moderation to the officials; I meant just here, it would be easy to just say the site has moderation. I don't believe there is any, which means it's a dumping ground with everyone posting all sorts of trash. Which, by the way, is another reason the site shouldn't even warrant any attention, but obviously government officials are stupid to even bother.

francocalvo
0 replies
5h25m

Reframing what you just said, you think people should prove they are innocent against any accusation because it would be "pretty straightforward"?

Either you like authoritarian governments or you have it in for this website (or both?)

yard2010
0 replies
9h41m

Don't try to normalize Aladdin by saying everything he does is Aladdin. He is such an Aladdin you can't defend him by saying he's Aladdin!!

callalex
0 replies
10h40m

Got any problematic examples?

mattl
6 replies
13h9m

How did you make a Windows executable work on the web?

sedatk
3 replies
13h2m

Using the CGI protocol on a Windows server. IIS (Windows' own web server) basically interfaces with executables by running them, feeding them HTTP headers and server variables through environment variables, and reading the response HTTP headers and body from their STDOUT. It's very inefficient, of course, since every request requires spawning a new copy of the executable, but it worked fine in its first months :)

Here is a very simple example from the original sources: https://github.com/ssg/sozluk-cgi/blob/master/hede.pas
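
The same contract, sketched in Python instead of Pascal just to make the mechanism concrete (the environment variable names are from the CGI spec; the script itself is made up):

    #!/usr/bin/env python3
    # Minimal CGI program: the web server passes the request through
    # environment variables (and the body on stdin), and reads the response
    # headers and body from stdout.
    import os
    import sys

    method = os.environ.get("REQUEST_METHOD", "GET")
    query = os.environ.get("QUERY_STRING", "")

    # Headers first, then a blank line, then the body.
    sys.stdout.write("Content-Type: text/html; charset=utf-8\r\n\r\n")
    sys.stdout.write(f"<h1>Hello from CGI</h1><p>{method} ?{query}</p>\n")

One process per request, exactly as described above.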

dijit
2 replies
8h18m

Don't sell yourself too short here, that's exactly how Perl/PHP worked, and that was the de facto standard of the same vintage (and for a decade more).

KronisLV
1 replies
6h3m

Honestly, there's a lot of beauty in that simplicity. I can definitely imagine someone also wanting to work with mod_php in Apache (just a module for the web server).

That said, FastCGI and similar technologies were inevitable, and something like PHP-FPM isn't much more difficult to actually run in practice.

Still, having a clearly defined request lifecycle is wonderful, especially compared to how Java application servers like Tomcat/GlassFish used to work with Servlets - things there have gotten better and easier too, but still...

whstl
0 replies
2h6m

Agree. I also loved the simplicity. It’s not that different from Serverless, if you look at it.

There is an HTTP server handling all the HTTP stuff and process launching (which is handled by API Gateway in AWS, for example), and the communication between it and the “script” just uses language or OS primitives instead of more complex APIs.

The 2000s were quite wild in how things changed… suddenly you have giant frameworks that also parse HTTP, plus a reverse proxy in front. At some point even PHP became all about frameworks.

I wonder if we wouldn’t have a more open, standardized and mature version of CGI/Serverless if it had been a more gradual transition rather than a couple of very abrupt paradigm shifts.

franzb
1 replies
13h3m

I imagine it ran server-side (on Windows).

bruce511
0 replies
12h0m

Indeed. I think it's worth going a little deeper for those who perhaps aren't familiar with some of the underlying principles of the Web.

For starters, all the program does is receive requests (as text) over a TCP/IP connection. It replies over the same connection.

So writing a Web server in any language, on any OS, is a trivial exercise (once you have the ability to read and write TCP/IP).

The program has to accept the input, calculate the output, and send the output.

If the input is just file names, then the program just reads the file and sends it. (Think static site).

The program may parse the file, and process it some more. It "interprets" code inside the file, executes it, and thus transforms the output. Think PHP.

In these cases a generic server fits the bill. Think Apache, IIS, nginx and so on.

The next level up are programs that are compiled. They generate the output on the fly, often with no, or little, disk interaction. This sort of program often uses a database, but might not. (An online Sudoku game, for example, might do everything in memory.)

Again, any of the above can be built on any OS and written in any language with TCP support.
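
To make the "it's just text over a TCP connection" point concrete, here is roughly the smallest possible server in Python (illustrative only, no error handling or concurrency):

    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8080))
    server.listen()

    while True:
        conn, _addr = server.accept()
        request = conn.recv(4096).decode("latin-1")   # plain text: "GET /path HTTP/1.1\r\n..."
        path = request.split(" ")[1] if request.count(" ") else "/"
        body = f"<h1>You asked for {path}</h1>"
        response = (
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/html\r\n"
            f"Content-Length: {len(body)}\r\n"
            "Connection: close\r\n"
            "\r\n" + body
        )
        conn.sendall(response.encode("latin-1"))
        conn.close()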

yard2010
3 replies
9h43m

Why do dictators love to ruin old stuff?

stavros
1 replies
7h23m

I don't know, why do dictators love to ruin old stuff?

OJFord
0 replies
6h4m

Because they think they oughtacrack.

(That's the best I've got, clearly need more crackers.)

plemer
0 replies
7h23m

“Who controls the past controls the future”

system2
3 replies
12h33m

Is there a reason why they are not taking the 1999 version of the domain down?

sedatk
2 replies
11h56m

Because the platform switched to it only last week, no other reason. It was on eksisozluk1923.com before that. The moment this new domain catches on in popularity, they'll find an arbitrary reason to ban that too.

system2
1 replies
11h29m

Let's hope things change after 2028. Much love, brother.

yard2010
0 replies
9h40m

I don't think "elections" change the outcome of a dictatorship.

whstl
2 replies
12h45m

Oh, I did something similar. I built a quite popular local (non-English-language) gaming forum with an Access file hosted on a Windows server and a VBScript ASP file, which had just been released. That's the original ASP, before ASP.NET. I was 13 or 14 years old at the time and didn't know better. It was no SQLite, so I had some weird concurrency problems. On top of that, I ran into some size limit (was it 2GB?) pretty quickly, but at that point it was time to look for a bigger server and figure out real databases anyway.

It eventually stopped being popular under my administration, so I transferred the domain to some people around 1999. It was rebuilt with PHPBB or something and got a new life. It's still on, surprisingly.

sedatk
1 replies
12h1m

Fascinating that our stories intersect so much. I later converted that Delphi code to ASP/VBScript because native Delphi code ran really slowly on a new DEC Alpha AXP server due to emulation on the RISC architecture. ASP code was much faster despite being interpreted :) I found ASP way more practical too. Access was also my natural next choice of database. Not very scalable, but a night-and-day difference compared to a text file :)

whstl
0 replies
3h35m

I never really stopped to think about it, but ASP was indeed quite performant, considering it was all interpreted, running on late-90s shared-hosting hardware with very little RAM and super slow hard disks. The site got a few thousand active users and worked quite well, apart from the DB size limits.

Fast forward 10, 20, almost 30 years and I frequently encounter websites that struggle to work under the same load, even with expensive AWS bills, especially when working with Rails.

Perhaps ASP was performant because the site was a few orders of magnitude smaller than anything you'd see today, even though it was full featured. Probably 1000x or 10000x smaller if I also include third-party libraries in the count. It was quite comparable to serverless/edge computing actually.

weinzierl
1 replies
9h17m

It should not have to be said, but (especially in the West) we tend to forget about it:

Turkey has more inhabitants than the most populous country in western Europe (Germany). Turkey is also significantly larger than the largest country in western Europe (France).

When it comes to the number of Internet users it is on par with Germany and beats all other western European countries.

sedatk
0 replies
8h38m

True. I think the number of Internet users in Turkey has surpassed 70 million. Eksi Sozluk used to receive 30+ million unique visitors monthly before it got banned.

pektezol
0 replies
10h46m

Very good :)

LeonB
0 replies
7h11m

Sedat, you’re a legend, and a machine, great to see you here or anywhere. Good luck with the legal challenges.

sowbug
27 replies
11h49m

Not my idea or implementation.

Our startup built a plugin for Microsoft Outlook. It was successful, and customers wanted the same thing but for Outlook Express. Unfortunately, OE had no plugin architecture. But Windows has Windows hooks and DLL injection. So we were able to build a macro-like system that clicked here and dragged there and did what we needed it to. The only problem was that you could see all the actions happening on the screen. It worked perfectly, but the flickering looked awful.

At lunch, someone joked that we just had to convince OE users not to look at the screen while our product did its thing. We all laughed, then paused. We looked around at each other and said "no, that can't work."

That afternoon someone coded up a routine to screenshot the entire desktop, display the screenshot full-screen, do our GUI manipulations, wait for the event loop to drain so that we knew OE had updated, and then kill the full-screen overlay. Since the overlay was a screenshot of the screen, it shouldn't have been noticeable.
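
For flavor, here is roughly what that freeze-frame trick looks like sketched in Python with tkinter and Pillow (the real product used native Windows APIs; this is just an illustration of the idea):

    import tkinter as tk
    from PIL import ImageGrab, ImageTk   # Pillow

    def with_frozen_screen(do_hidden_work):
        shot = ImageGrab.grab()                    # screenshot the whole desktop
        overlay = tk.Tk()
        overlay.attributes("-fullscreen", True)
        overlay.attributes("-topmost", True)
        photo = ImageTk.PhotoImage(shot, master=overlay)
        tk.Label(overlay, image=photo).pack()
        overlay.update()                           # show the frozen frame
        try:
            do_hidden_work()                       # clicks and drags happen behind it
        finally:
            overlay.destroy()                      # reveal the updated UI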

It totally worked. The flickering was gone. We shipped the OE version with the overlay hiding the GUI updates. Users loved the product.

xxs
14 replies
10h1m

The thing (screenshot and all) was routinely done in the 90s with all those fancy non-rectangular applications/demos/launchers [usually available on CDs that came w/ magazines]. They had transparent/alpha zones that copied the screen under them.

anthk
5 replies
8h37m

Back in the day, terminals under Linux did that to fake transparency: they just copied a chunk of the root window (where you put some wallpaper) as the terminal background and applied an alpha layer if the user wanted a translucent color.

onli
4 replies
8h17m

Fake transparency is still a thing, for desktops without compositor. Not only in terminals, but also in docks.

anthk
3 replies
7h36m

But docks work in a different way, such as XRender or just a PNG with an alpha layer. Much less expensive than copying a chunk of the backdrop and then pasting it as your terminal background.

Often docks will be able to show you the underlying windows behind them, even without compositing. But under a terminal with fake transparency you couldn't see another window behind it, just the backdrop.

I think X11 had some extensions to allow translucency too for software like oneko, xroach or such.

stavros
1 replies
7h31m

Why is it less expensive? Sounds like the exact same operation, except the OS is doing it.

anthk
0 replies
5h13m

No. XRender worked in a different way than ATerm/Eterm.

onli
0 replies
4h53m

From my experience this is wrong. If you set a transparent background in wxwidgets/gtk it will show you a grey background. And gtk3 even removed the ability to query the x background pixmap. So if there is better support for this at an X level, this is not available above.

See https://github.com/onli/simdock/issues/11, and if there really are good alternative solutions I'd be happy for some help.

TonyTrapp
5 replies
8h13m

Maybe some incompetent people did it that way, but 1-bit transparency was very well possible with native Windows APIs in the 90s (see SetWindowRgn). Later on (starting with Windows 2000 IIRC) it was also possible to have semi-transparent (alpha-blended) regions.

constantly
1 replies
7h21m

In a thread about "it's not stupid if it works" you accuse people of doing something "incompetent" by doing something ostensibly "stupid but works", but totally miss what was even possible "non-stupidly." There feels like some form of irony here.

TonyTrapp
0 replies
6h55m

Yes, obviously there's a grain of irony in the reply. ;)

timbaboon
0 replies
7h18m

It's not stupid if it works...

stormking
0 replies
7h52m

Please educate yourself before you accuse people of incompetence. Of course it was about (pseudo) alpha blending, because "smooth shadows" around everything became very popular in the late 90s.

firebot
0 replies
7h35m

Not exactly. With 2000/XP you can set the entire window opacity. Still no native GDI+. So regions were still just shapes (1-bit mask). Trying to set a pre-multiplied bitmap to a window will just give you a brightened version (the multiplied version).

Though there was some support for cursor shadow at this point, and the improved GDI heap really helped, making faking a window drop shadow actually pretty feasible. Vista and Aero is the first native support for Windows with alpha channels.

I actually liked Vista. It worked just fine and wasn't unnecessarily reorganized. Plus the improvements to the OS threading model were excellent, which is why 7 is so incredibly rock solid, probably peak Windows.

Maxion
1 replies
8h47m

Ahh so THAT's how they did that!

firebot
0 replies
8h1m

It depends. Windows can be set to shapes going back to like Windows 95, iirc.

But alpha, specifically, would be faked at that point. Windows 2000 supported alpha for such things in a basic way, like XP. Vista with Aero and 7 then really expanded the themes and the window compositing.

stormking
6 replies
9h57m

That's called double-buffering.

hutzlibu
4 replies
8h50m

I think it was meant as a joke, so for anyone else reading it, no it is not actually double-buffering.

stormking
2 replies
7h49m

Half a joke, because the concept is very much the same. You "paint" to an invisible buffer and then you swap.

hutzlibu
1 replies
7h42m

Yeah, but the concept of double buffering is to swap every frame, for performance gains.

Here nothing gets swapped, and the screen just gets temporarily hidden. So vaguely similar ... but not very much the same in my opinion.

mycall
0 replies
3h20m

freeze-frame buffering

thebruce87m
0 replies
8h48m

double-bluffering?

givemeethekeys
0 replies
9h28m

They invented double-buffering! :)

nonfamous
1 replies
2h32m

Early versions of iOS did this too. You’d tap an app icon and the app would appear instantly in its prior state … but it was just a saved screenshot that iOS displayed while the app was actually loading.

MBCook
0 replies
25m

Really it still does, in a way. The "apps" in the app switcher work that way. Even if the app is still live in memory, it's not rendering to the switcher all the time. And if it was killed, then the pic is all that's left.

zerr
0 replies
10h28m

You could just overlay the Outlook window.

rewgs
0 replies
3h7m

I absolutely love this.

faloppad
0 replies
10h30m

Love it, lol

fxtentacle
23 replies
15h28m

We have a production service running for years that just mmaps an entire SSD and casts the pointer to the desired C++ data structure.

That SSD doesn't even have a file system on it, instead it directly stores one monstrous struct array filled with data. There's also no recovery, if the SSD breaks you need to recover all data from a backup.

But it works and it's mind-bogglingly fast and cheap.
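
A Python rendering of the same trick with mmap and ctypes, for anyone who wants to picture it (the original is C++; the device path, record layout and count below are made up):

    import ctypes
    import mmap
    import os

    class Record(ctypes.Structure):
        _fields_ = [("key", ctypes.c_uint64),
                    ("value", ctypes.c_double)]

    NUM_RECORDS = 1_000_000                        # hypothetical
    fd = os.open("/dev/nvme0n1", os.O_RDWR)        # raw device, no filesystem
    mm = mmap.mmap(fd, NUM_RECORDS * ctypes.sizeof(Record))

    records = (Record * NUM_RECORDS).from_buffer(mm)   # typed view, zero copies
    records[42].key = 7
    records[42].value = 3.14

Reads and writes go straight through the page cache to the device; there is no serialization step at all, which is where the speed comes from.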

whartung
9 replies
15h8m

I've always wanted a Smalltalk VM that did this.

Eternally persistent VM, without having to "save". It just "lives". Go ahead, map a 10GB or 100GB file to the VM and go at it. Imagine your entire email history (everyone seems to have large email histories) in the "email array", all as ST objects. Just as an example.

Is that "good"? I dunno. But, simply, there is no impedance mismatch. There's no persistence layer, your entire heap is simply mmap'd into a blob of storage with some lightweight flushing mechanic.

Obviously it's not that simple, there's all sorts of caveats.

It just feels like it should be that simple, and we've had the tech to do this since forever. It doesn't even have to be blistering fast, simply "usable".

tonyarkles
3 replies
11h49m

That is so wonderfully fascinating to me. You could just download a file into a variable and when that variable goes out of scope/has no more references it’d just be automatically “deleted”. Since there’s no longer a concrete “thing” called a file, you can organize them however you want and with whatever “metadata” you want by having a dict with the metadata you want and some convention like :file as the key that points to the body. Arbitrary indexes too; any number of data structures could all share a reference to the same variable.

Simple databases are just made up of collections of objects. Foreign key constraints? Just make the instance variable type a non-nullable type. Indexes? Lists of tuples that point to the objects. More complex databases and queries can provide a set of functions as an API. You can write queries in SQL or you can just provide a map/filter/reduce function with the predicate written in normal code. Graph databases too: you can just run Dijkstra’s algorithm or TSP or whatever directly on a rich persistent data structure.

Thanks for the neat idea to riff on. I like it! Thinking about it in practice makes me a little anxious, but the theory is beautiful.

mcherm
2 replies
5h52m

So, I've occasionally played around with a language that pretty nearly does this.

Mumps is a language developed in 1967, and it is still in use in a few places including the company where I work.

The language is old enough that the first version of it has "if" but no "else". When they added "else" later on it was via a hack worthy of this post: the "if" statement simply set a global variable and the new "else" statement checked that. As a result, "if-else" worked fine but only so long as you don't use another "if" nested within the first "if" clause (since that would clobber the global variable). That was "good enough" and now 50 years later you still can't nest "if" statements without breaking "else".

But this very old language had one brilliant idea: persistence that works very much the way you describe. Any variable whose name begins with "^" is persisted -- it is like a global variable which is global, not just to this routine but to all of the times we execute the program.

It is typical to create single variables that contain a large structure (eg: a huge list with an entry for each customer, indexed by their ID, where the entry contains all sorts of data about the customer); we call these "tables" because they work very much like DB tables but are vastly simpler to access. There's no "loading" or impedance mismatch... just refer to a variable.

Interestingly, the actual implementation in modern day uses a database underneath, and we DO play with things like the commit policy on the database for performance optimization. So in practice the implementation isn't as simple as what you imply.

govg
1 replies
4h1m

That global persistence model across executions is very fascinating. If you don't mind, could you explain what line of work this is and how it helps the use case? I have encountered similar concepts at my old job in a bank, where programs could save global variables in "containers" (predates docker IIRC) and then other programs could access this.

vdqtp3
0 replies
9m

MBCook
1 replies
14h32m

Isn’t that sort of the original idea for how Forth would work? Everything is just one big memory space and you do whatever you need?

I’m going from very hazy memory here.

_0ffh
0 replies
6h8m

I think it is, although you have to manually save the current image if you want to keep the changes you made. Which I find entirely reasonable.

I also think that what gp is looking for is Scratch. IIRC it's a complete graphical Smalltalk environment where everything is saved in one big image file. You change some function, it stays changed.

lproven
0 replies
4h17m

This is what Intel Optane should have given us.

Non-volatile memory right in the CPU memory map. No "drives", no "controllers", no file allocation tables or lookup lists or inodes. Save to memory <16GB, say, it's volatile: that's for fast-changing variables. Save to memory >16GB and it's there even through reboots.

jdougan
0 replies
13h33m

Arguably you could use GemStone/S like that, though it's probably not the kind of capabilities you want.

gregw2
0 replies
3h9m

It’s not Smalltalk but you might find OS/400 interesting for having a single level store for object persistence.

Old HN discussion with Wikipedia pointers: https://news.ycombinator.com/item?id=18907798

brysonreece
6 replies
15h16m

Wow. How do design decisions get made that result in these types of situations in the first place?

taneq
1 replies
13h20m

Someone says "hey if we had 900GB of RAM we could make a lot of money" and then someone else says "that's ridiculous and impossib- hang on a minute" and scurries off to hack together some tech heresy over their lunch break.

speedgoose
0 replies
11h0m

By the way, you can find single servers with 32TB of ram nowadays.

MBCook
1 replies
14h29m

Honestly it’s not too far off from what many databases do if they can. They manage one giant file as if it’s their own personal drive of memory and ignore the concept of a filesystem completely.

Obviously that breaks down when you need to span multiple disks, but conceptually it really is quite simple. A lot of the other stuff file systems do is to help keep things consistent. But if there's only one "file" and you don't ever need metadata, then you don't really need that.

Very smart solution really.

jasonwatkinspdx
0 replies
12h41m

Yeah, a lot of database storage engines use O_DIRECT because the OS's general purpose cache heuristics are inferior vs them doing their own buffer pool management. That said if you try this naively you're likely to end up doing something a lot worse than the Linux kernel.

username135
0 replies
15h12m

If I had to guess:

Doing it this way = $

Doing it that way = $$$

_zoltan_
0 replies
13h25m

it's a very reasonable thing to do if you need performance.

nurettin
0 replies
12h57m

I do similar things with mmap and dumping raw structs to get insane speeds one wouldn't expect to get from traditional databases.

Perhaps you could even pause the operations, snapshot with dd and resume everything back in order to get a backup.

lepisma
0 replies
13h23m

Very interesting. Can you give a sense of the speed up factor?

jasonwatkinspdx
0 replies
12h43m

LMDB has a mode designed to do something similar, if anyone wants something like this with just a bit more structure to it like transactional updates via CoW and garbage collection of old versions. It's single writer via a lock but readers are lock/coordination free. A long running read transaction can delay garbage collection however.

anthk
0 replies
8h24m

Linux had methods to avoid fsync on filesystems, and if you used an SSD and a UPS you would usually have no problems. Pixar used that to write GBs of renders and media, for instance.

Shorel
0 replies
1h45m

I love this one.

If anyone from AWS reads your comment, they could have an idea for a new "product" xD

MaxBarraclough
0 replies
42m

Sounds fragile, C++ compilers are permitted to do struct padding essentially as they please. A change in compiler could break the SSD<-->struct mapping (i.e. the member offsets).

C++ arrays, on the other hand, are guaranteed not to have padding. That's essentially what memory-mapped IO gives you out of the box.

https://stackoverflow.com/a/5398498

http://www.catb.org/esr/structure-packing/#_structure_alignm...

MBCook
14 replies
14h39m

I used to work at a small company. We had a few remote embedded devices that did work and sent data back to the mothership over the internet. Their firmware could be remotely updated, but we were always very careful.

Well one day a mistake was finally made. Some of the devices went into a sort of loop. They’d start running the important process, something would go wrong, and they’d just retry every few minutes.

We caught the issue almost instantly since we were watching the deploy, and were able to stop updates before any other devices picked it up. But those that already got it were down.

We could ask the devices to send us the output of a command remotely, but it was too limited to be able to send back an error log. We didn’t have time to send back like 255 characters at a time or whatever, we needed to get it fixed ASAP.

And that’s when the genius stupid hack was suggested. While we couldn’t send up a full log file, we could certainly send up something the length of a URL. So what if we sent down a little command to stick the log on PasteBin and send up the URL to us?

Worked like a charm. We could identify what was going wrong and fix it in just a few minutes. After some quick (but VERY thorough) testing everything was fixed.

stephenr
8 replies
12h58m

Your company had remote embedded devices but didn't keep one "locally" for debugging issues?

bongodongobob
7 replies
11h43m

There's a zillion things that you can't necessarily test for locally. When you have a fleet of IoT devices deployed in other people's environments, there's literally no way to test everything.

Your question comes down to "Why didn't you just deploy bug free code using perfect processes? That's what I always do."

I mean, cmon.

stephenr
6 replies
10h27m

I never said they should test for everything.

OP's description suggests it was an error that's common to all the deployed instances that received the update, rather than some specific combination of environment and that deployment.

It would have allowed them to run the same deployment locally and use physical access (serial, a display, sd card, whatever) to capture the error log.

What they came up with is clever but it's very surprising that they needed it, especially given that they have very limited remote access to the units that are in the wild.

bongodongobob
2 replies
10h9m

No, they said some.

stephenr
1 replies
6h57m

Emphasis is mine:

> We caught the issue almost instantly since we were watching the deploy, and were able to stop updates before any other devices picked it up. But those that already got it were down.

MBCook
0 replies
1h40m

You're correct. I think every box that got it started having problems (or at least most); the only reason any were still up is that updates were scattered, in case of this kind of incident (and to avoid hammering our poor little server).

MBCook
2 replies
1h43m

After all this time I don’t remember what the bug was. We did have boxes locally that we tested on of course. But somehow this got out.

It might’ve been something that only showed up under certain configurations. It might’ve been something that just should have been caught under normal circumstances and wasn’t for some reason. It may have been something that worked fine tested off-hardware until some bug in packaging things slightly changed it and broke a script. Or it could’ve been a case of “this is trivial, I know what I’m doing, it will be fine“.

We were a very small operation so I’m not going to say that we had an amazing QC process. It may have been a very human mistake.

stephenr
1 replies
1h1m

> I know what I'm doing, it will be fine

I don't think anyone is truly a programmer until they've learnt the hard way the outcome of "what could go wrong"!

Thanks for the update, happy holidays mate!

MBCook
0 replies
13m

Sure! I’ve learned that lesson the hard way a couple of times myself.

kr0bat
4 replies
14h6m

How could you have enough control over the machine to reroute the error log to (what I assume was) a Pastebin API, while also lacking access to any of the files on the machine? In my mind you'd be required to SSH into the machine to upload, and if you're SSH'd in, why not just cat the log?

zimpenfish
0 replies
10h24m

I was doing some proxy soak testing for a company once where we had to run the tests from the server room but there was no non-proxy connectivity from that room to where we were monitoring the tests. Simple solution: output the progress to Dropbox, watch the same file upstairs. Bit of delay, sure, but better than having no idea how things are going until the 30-60min test is done (and no, we weren't allowed to sit in the server room watching it.)

lelanthran
0 replies
12h54m

> In my mind you'd be required to ssh into the machine to upload, and if you're ssh'd in, why not just cat the log?

SSH on remote IoT-class devices works. The problem is rarely SSH; the problem is always some form of key management, plus the NATs in between.

If you've got a few thousand devices in the field, public key management can become a real pain, especially when you want to revoke keys.

iimblack
0 replies
12h31m

I’ve worked at a company where our remote access was over a super slow modem line but the machine did have access to the internet.

MBCook
0 replies
13h33m

Good question! We couldn't SSH in, which is too bad, since then this would all have been trivial. We had no direct access to the boxes; they were often behind firewalls. In fact, that was the suggested placement for security reasons. They weren't full servers, just little embedded things.

We had a little HTTP API that it was always talking to. It would call the API to send data back to us or just check in regularly, and we would return to it a little bit of status stuff like the current time to keep clocks in sync, and a list of which “commands” they need to run.

Mostly the commands were things like "your calibration data is out of date, pull an update" or "a firmware update is available".

But one of them let us run arbitrary shell commands. The system was very limited. I wasn't a developer directly on the project, but I think it was just our custom software plus BusyBox and a handful of other things our normal shell scripts used. I assume it had been added after some previous incident.

I believe the basic idea was that during troubleshooting you could tell a box to return the output of “cat /etc/resolv.conf” or something else that we hadn’t preplanned for without having to send someone into the field. But since it was only for small things like that it couldn’t return a full file.

Luckily one of the commands was either curl or wget. So we could send down “curl -whatever /log/path https://pastebin/upload” or whatever it was. I don’t remember if we signed up for a pastebin account so we knew where it would show up or if we had it return URL to us in the output of the curl command.

This suggestion was literally a joke. We were all beating our heads against the wall trying to help and someone just said “why don’t we just stick it on pastebin“ out of frustration, and the developer on the project realized we had what we needed to do that and it would work.

rented_mule
12 replies
22h58m

15+ years ago, I was working on indexing gigabytes of text on a mobile CPU (before smart phones caused massive investment in such CPUs). Word normalization logic (e.g., sky/skies/sky's -> sky) was very slow, so I used a cache, which sped it up immensely. Conceptually the cache looked like {"sky": "sky", "skies": "sky", "sky's": "sky", "cats": "cat", ...}.

I needed cache eviction logic as there was only 1 MB of RAM available to the indexer, and most of that was used by the library that parsed the input format. The initial version of that logic cleared the entire cache when it hit a certain number of entries, just as a placeholder. When I got around to adding some LRU eviction logic, it became faster on our desktop simulator, but far slower on the embedded device (slower than with no word cache at all). I tried several different "smart" eviction strategies. All of them were faster on the desktop and slower on the device. The disconnect came down to CPU cache (not word cache) size / strategy differences between the desktop and mobile CPUs - that was fun to diagnose!

We ended up shipping the "dumb" eviction logic because it was so much faster in practice. The eviction function was only two lines of code plus a large comment explaining all the above and saying something to the effect of "yes, this looks dumb, but test speed on the target device when making it smarter."
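
In pseudo-Python, the shipped version was roughly this shape (normalize() here is a toy stand-in for the real, slow stemmer; the limit is illustrative):

    MAX_ENTRIES = 50_000              # sized to fit the 1 MB RAM budget

    def normalize(word):
        # Toy stand-in for the real word-normalization logic.
        return word.rstrip("'s")

    cache = {}

    def normalize_cached(word):
        stem = cache.get(word)
        if stem is None:
            if len(cache) >= MAX_ENTRIES:
                cache.clear()         # "dumb" eviction: throw everything away
            stem = normalize(word)
            cache[word] = stem
        return stem

The entire eviction policy is that one clear() call, which is exactly what made it fast on the target CPU.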

o11c
7 replies
13h28m

... how does doing a full string dict lookup take less time than just checking a few trailing characters in a trie? For indexing it's okay to be aggressive since you can check again for the actual matches.

Exoristos
5 replies
12h16m

> since you can check again for the actual matches.

Can you explain this?

o11c
4 replies
11h59m

An aggressive stemmer might stem both "generic" and "general" to "gener".

Then if your query is "what documents contain 'generic'?", you look in the index for "gener" and then open each of those documents and check if it actually has "generic" using a stricter stemmer (that accepts generic{,s}, genericness{,es}, genericit{y,ies}, generically ... this is a bit of a bad example since they all have the prefix directly). The cost is acceptable as long as both words have about the same frequency so it doesn't affect the big O.

Of course if you have any decent kind of compute, you can hard-code exceptions before building the index (which does mean you have to rebuild the index if your exception list changes ... or at least, the part of the index for the specific part of the trie whose exception lists changed - you don't do a global lookup!) to do less work at query time. But regardless, you still have to do some of this at query time to handle nasty cases like lie/lay/laid (searching for "lie" should not return "laid" or vice versa, but "lay" should match both of the others) or do/does/doe (a more obviously unrelated example).

rented_mule
3 replies
11h38m

> and then open each of those documents

That alone ruled out doing anything like this on the device I'm talking about. The goal, which we reached, was to be able to search 1,000 documents in 5 seconds. Opening a document took nearly a second given the speed of our storage (a few KB/s). The search itself took about a second, and then we'd open up just enough of the documents to construct search result snippets as you paged through them.

zo1
2 replies
10h11m

Gosh this story makes me lament the state of our field.

If the current gen of devs were to build this, it would all be done "on the cloud" where they can just throw compute at the problem, and as long as the cost was less than $5 per month they wouldn't care. That's the problem of the product managers, marketing execs and VCs.

uxp8u61q
0 replies
7h6m

This lament is about as interesting as complaining about kids not knowing how to use rotary phones.

rented_mule
0 replies
8h25m

I know exactly what you're talking about. The product manager on the project described above added little value. Luckily, they were so ineffective that they didn't get in the way often. I've had others who were "excellent" at getting in the way.

That said, three of the most impressive people I've ever known are a former marketing exec and two former product managers, all of whom now work in VC. In their former roles, each helped me be the best engineer I could be. The people in their current VC portfolios are lucky to have them as advisors. What makes them so good is that they bring expertise worth listening to, and they clearly value the technical expertise brought by engineers. The result is fantastic collaboration.

They are far from typical, but there are truly great ones out there. Losing hope of that might make it more difficult to be aware of the good fortune of working with one, and maximizing the experience. My hope is that every engineer gets at least one such experience in their career. I was lucky enough to experience it repeatedly, working with at least one great one for about half of my 30-year career.

rented_mule
0 replies
11h55m

We used a JIT-less subset of Java 1.4 on that device. Hashing of word-length strings in the underlying optimized C code was extremely fast and CPU cache friendly (and came with the JVM). With the simple cache in place, indexing time was dominated by the libraries that extracted the text from the supported formats. So, in line with this Ask HN's topic, it was good enough. And less code to maintain. And easier for engineers after me to understand. A good tradeoff overall.

More technical details for the curious...

Earlier I had done a quick trie implementation for other purposes in that code, but abandoned it. The problem is that we had to index (and search) large amounts of content in many different languages, including Chinese and Japanese with full Unicode support. This means that there is such large potential fan-out / sparsity within the trie that you need faster lookups / denser storage at each node in the trie (a hash map or binary search or ...). In that situation, a trie can be much slower than a single hash map with short strings as keys. Especially in a JIT-less JVM (the same code had to run server-side, where native extensions weren't allowed). If we were only dealing with ASCII, then maybe. And there would also be more complexity to maintain for decades (you can still buy newer versions of the device today that are running the same indexing and search code).

All those languages were also the reason that normalization needed caching. In v1, we were English only. I hand rolled a "good enough" normalizer that was simple / fast enough to not need caching. In v2 we went international as described above. I wasn't capable of hand rolling anything beyond English. So we brought in Lucene's tokenizers/stemmers (including for English, which was much more accurate than mine). Many of the stemmers were written in Snowball and the resulting Java code was very slow on the device.

dharmab
1 replies
14h55m

Similarly, a modder recently found that unrolling loops _hurt_ performance on the N64 because of RAM bus contention: https://www.youtube.com/watch?v=t_rzYnXEQlE

pclmulqdq
0 replies
13h32m

Unrolled loops can also often hurt for the same reason on big server chips. It's not always clearly good to unroll your loops.

fritzo
0 replies
15h31m

Those are my favorite functions! Two lines of code with a page of text explaining why it works.

Agentlien
0 replies
8h33m

This reminds me of something I encountered when working on surgical training simulators about ten years ago.

There was a function which needed to traverse a large (a few million vertices) mesh and, for each vertex, adjust its position to minimise some measurement.

The original code, written by a colleague, just calculated in which direction to move it and then, in a loop, made changes of decreasing magnitude until it got close enough.

This function was part of a performance bottleneck we had to solve, so I asked my colleague why he hadn't solved it analytically. He shrugged and said he hadn't bothered because this worked.

So, I rewrote it, calculating the exact change needed and removing the loop. My code took twice as long. After analysing why, I realised with his heuristic most triangles required only 1 iteration and only a handful required at most 3. This was less work than the analytical solution which required a bunch of math including a square root.
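
The heuristic was essentially this shape (an illustrative Python sketch; the real code worked on mesh vertices in C++):

    def adjust(position, error, step=1.0, tolerance=1e-4):
        # Pick the direction once, then take shrinking steps until the
        # measurement is close enough; most vertices needed only one step.
        direction = -1.0 if error(position) > 0 else 1.0
        while abs(error(position)) > tolerance and step > tolerance:
            position += direction * step
            step *= 0.5
        return position

Cheap per iteration, and since the starting guess is usually already close, the total work beats the exact solve with its square root.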

rudasn
10 replies
1d5h

Launching a headless browser just to generate some PDFs.

Turns out, if you want to turn HTML+CSS into PDFs quickly, doing it via a browser engine is a "works really well" story.
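
For anyone who wants to try it from Python, a minimal sketch using Playwright to drive headless Chromium (puppeteer from Node works the same way):

    from playwright.sync_api import sync_playwright

    def html_to_pdf(html, out_path):
        with sync_playwright() as p:
            browser = p.chromium.launch()          # headless by default
            page = browser.new_page()
            page.set_content(html, wait_until="networkidle")
            page.pdf(path=out_path, format="A4", print_background=True)
            browser.close()

    html_to_pdf("<h1>Invoice #42</h1><p>Hello, PDF.</p>", "invoice.pdf")

You get the browser's full layout engine, print CSS, web fonts and all, for free.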

vgalin
1 replies
1d4h

I wrote a Python package [1] that does something similar! It allows the generation of images from HTML+CSS strings or files (or even other files like SVGs) and could probably handle PDF generation too. It uses the headless version of Chrome/Chromium or Edge behind the scenes.

Writing this package made me realize that even big projects (such as Chromium) sometimes have features that just don't work. Edge headless wouldn't let you take screenshots up until recently, and I still encountered issues with Firefox last time I tried to add support for it in the package. I also stumbled upon weird behaviors of Chrome CDP when trying to implement an alternative to using the headless mode, and these issues eventually fixed themselves after some Chrome updates.

[1] https://github.com/vgalin/html2image

rudasn
0 replies
1d3h

Yeah, it's the same concept; instead of .screenshot you do .pdf in puppeteer.

But with pdfs the money is on getting those headers and footers consistent and on every page, so you do need some handcrafted html and print styling for that (hint: the answer is tables).

unnouinceput
0 replies
8h17m

I had a Chromium component added to a project just to show the users the help file, which was a giant PDF document. The PDF file was from a 3rd-party vendor who didn't know better/refused to change their system, so we had to show it "as is" to the users. Every PDF reader component we tried failed because the PDF file had some crappy features in it that none of those components knew how to parse. The Chromium engine, for all the hate it gets nowadays, had no problem with any of those PDF files.

quyse
0 replies
1d5h

I've implemented recently just the same thing, but for SVG -> PNG conversion. I found that SVG rendering support is crap in every conversion tool and library I've tried. Apparently even Chrome has some basic features missing, when doing text on path for example. So far Selenium + headless Firefox performs the best ¯\_(ツ)_/¯

polishdude20
0 replies
15h9m

We did that at the previous place I worked!

phanimahesh
0 replies
5h24m

This also happens to be the easiest path. There are other options but no good ones

im3w1l
0 replies
15h12m

I mean, browsers are built for displaying HTML+CSS and are the best at it. Given that these are "living standards", very few other programs can hope to keep up.

i386
0 replies
13h22m

This is how we exported designs at Canva. It works!

foul
0 replies
1d4h

I've seen a bit of SaaS and legacy websites-with-invoice-system doing that, with e.g. wkhtmltopdf. It isn't a lightweight solution, but it's a good hammer for a strange nail, a lot of off-the-shelf report systems suck.

doix
0 replies
15h12m

I did the same. We had a tool that would let you export to PDF. That PDF would be sent to our customers. Initially we just used the print functionality in the user's browser, but that caused the output to vary based on the browser/OS used.

People complained that the PDFs generated were slightly different. So instead I had the client send over the entire HTML in a POST request, opened it up in a headless Chrome with --print-to-pdf, and then sent it back to the client.

simonbarker87
9 replies
1d6h

I had a GCP Cloud Run function that rendered videos. It was fine for one video per request, but after that it slowed to a crawl and needed to shut down to clear out whatever was wrong. I assume a memory leak in MoviePy? I spent a couple of days looking at multiple options and trying different things; in the end I just duplicated the service so I had three of them and rotated which one we sent video renders to, doing each render one at a time. It was by far the cheapest solution, it meant we processed them in parallel rather than serially so it was faster, and all in all it worked a treat.

cr3ative
5 replies
1d5h

This reminds me of a service I recently found that was routinely crashing out and being restarted automatically. I fixed the crash, but it turns out it had ALWAYS been crashing on a reliable schedule - and keeping the service alive longer created a plethora of other issues, memory leaks being just one of them.

That was a structural crash and I should not have addressed it.

rolisz
3 replies
1d5h

How many memory leaks were discovered only during the winter code freeze, because there were no pushes being done, so no server restarts

jedberg
1 replies
14h34m

At reddit we would randomly select a process to kill every 10 minutes out of the 10 or so on each machine, just so they would all get a restart in case we didn't do a deployment for a few days.

At Amazon they schedule service bounces during code freeze for any service that is known to have memory leaks because it's easier than finding the leak, which isn't usually an issue since it gets deployed so often.

yjftsjthsd-h
0 replies
14h12m

And as a nice bonus you get chaos monkey for free:)

calvinmorrison
0 replies
1d5h

At Fastmail, the ops team ran failovers all the time, just to get our failures so reliable they worked no matter what. Only once in my tenure did a failover fail, and in that case there was a --yolo flag.

simonbarker87
0 replies
1d5h

Oooh, you've just reminded me of the email server at my first dev job. It would crash every few days and no one could work out why. In the end someone just wrote a cron-job-type thing to restart it once a day, problem solved!

seer
1 replies
14h31m

Hah welcome to cloudrun! I was evaluating it a few years ago to host some internal corporate app.

It worked great and was way easier to deploy than k8s setups. However after some testing we found out that the core logic of the app - a long running process, would just crawl to a halt after some time.

It turned out Google wanted to push you to use their (paid) queue / pubsub solutions, but they didn't want it to be _too_ obvious, so Cloud Run would actually throttle its CPU some time after the request that spawned it had returned.

Our logic was based on pushing stuff in a queue and having it be processed outside of a request, but google just f*ked with that solution.

And it would have been fine if that was upfront info, but it was buried in a doc page somewhere obscure, small print…

That's when I realized how bad GCP can be…

simonbarker87
0 replies
9h50m

Ah, that would make sense! It was always the second video being processed, at about the same point through it, that slowed down. Thanks

quickthrower2
0 replies
1d5h

What you call a hack everyone else calls devops :-). You have higher standards!

CharlieDigital
9 replies
1d6h

I worked at a startup where the core backend was 1 giant serverless Function. For error handling and recovery, the Function ran in a while loop.

For all its faults, it worked and it was generating revenue. Enough that the startup got to a sizable Series A. That experience completely changed how I think about writing code at startups.

naikrovek
5 replies
1d5h

This is great. If you look at older game source code you find things like this: things that we view as horrible hacks, yet which are both extremely stable and performant.

I see no reason to stop using these types of solutions, when appropriate.

muzani
2 replies
18h42m

Old games also didn't use a database; they saved everything in a giant text file.

I'm not sure if they were "extremely stable" though. Like Myspace, it might only work up until a certain point. What kills stuff is usually going viral.

naikrovek
1 replies
17h31m

I think you maybe underestimate the utility and reliability of flat text files on a filesystem.

If you don’t trust a filesystem, you can’t trust anything that uses one.

Flat files don’t scale past a certain point, but that point is way higher than most believe it is.

o11c
0 replies
13h15m

Specifically, flat files scale very far if there are no concurrent partial updates.

And "fork off a new process just to do the save" helps move that point farther away.

ricardobayes
0 replies
1d5h

Microcontroller/embedded stuff also.

lmm
0 replies
6h3m

Suppressing errors and continuing to run in a corrupt state is good for games (crashing is definitely not fun, continuing to run might be fun, unless it's the kind of game where you can get yourself locked out of winning long ahead of time) but not good for most other kinds of code - it's better for your accounting code to crash than to save the wrong numbers!

cr3ative
1 replies
1d5h

"on error resume next" never died, it just became serverless!

pmontra
0 replies
1d5h

A customer of mine wraps their Python code in try/except/pass so it never stops because of errors; it just skips whatever would have run after the exception. I added some logging so we're slowly understanding what fails and why.
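
A minimal sketch of that pattern with the logging bolted on (the function and logger names are made up):

    import logging

    logging.basicConfig(filename="swallowed-errors.log", level=logging.WARNING)
    log = logging.getLogger("customer_app")

    def run_step(step, *args, **kwargs):
        """Run one step; on failure, log it and carry on instead of dying."""
        try:
            return step(*args, **kwargs)
        except Exception:
            # Where there used to be a bare `except: pass`, at least leave a trace.
            log.exception("step %s failed, skipping", getattr(step, "__name__", step))
            return None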

sevagh
0 replies
1d5h

Are they hiring?

sokoloff
7 replies
1d6h

I had an old boiler that would sometimes trip and lock out the heat until someone went down and power cycled it. (It was its own monstrous hack of a gas burner fitted to a 1950s oil boiler and I think a flame proving sensor was bad.)

Every time it happened, it made for a long heat up cycle to warm the water and rads and eventually the house.

So I built an Arduino-controlled NC relay that removed power for 1 minute out of every 120. That was often enough to eliminate the effect of the fault, but not so often that I had concerns about too much gas building up if the boiler ever failed to ignite. 12 failed ignitions per day wouldn't give a build-up to be worried about.

That ~20 lines of code kept it working for several years until the boiler was replaced.

progbits
4 replies
1d5h

I have a similar one.

Our boiler has a pump to cycle hot water around the house - this makes it so you get warm water right away when you turn on a faucet and also prevents pipes in exterior walls from freezing in the winter.

This stopped working; the pump was fine, but the boiler was no longer triggering it.

I just wired up mains through an esp32 relay board to the pump and configured a regular timer via esphome.

Temperature-based logic would be even better, but I haven't found a good way to measure pipe temperature yet.

sokoloff
2 replies
1d5h

I eventually switched to an ESP32 and added temperature graphing: https://imgur.com/a/VM7nD74

IIRC, I used an RTD that I had left over from a 3D printer upgrade, but an 18B20 would work fine as well. A 10K NTC resistor might even be good enough. For what I needed (and I think for what you need), just fixing the sensor to the outside of the pipe [if metal] will give you a usable signal. That sensor was just taped with metal HVAC tape to the front cast iron door of the burner chamber.

But a dead-simple timer solution gets you pretty far as you know.

progbits
1 replies
1d5h

The pipes are insulated and I didn't want to cut into that, but maybe a small hole for a sensor wouldn't be too bad.

But as you say, the timer works well enough, and that means little motivation to continue working on it -- countless other projects await :)

BTW I've also tuned the timer to run for longer in the morning to get a hot shower ready.

Edit: nice dashboard, what are you using for the chart? I like the vintage look.

sokoloff
0 replies
1d4h

That is another somewhat hacky thing.

I have a mix of shame and pride that the chart (everything in the rectangle) is entirely hand-coded SVG elements emitted by the ESP web request handler.

myself248
0 replies
14h30m

I'm thiiiiiiis close to installing a circulating pump. I plan to power it off the bathroom lightswitch, which I might just replace with a motion sensor.

smallpipe
1 replies
1d5h

Couldn’t that be achieved with a mechanical timer switch and zero lines of code ?

yjftsjthsd-h
0 replies
14h11m

Probably that doesn't give you small enough time increments

btzs
7 replies
1d5h

As a 12 year old: I tried to overclock my first "good" computer of my own (an AMD Duron 1200 MHz). The system wouldn't start at 1600 MHz and I didn't know a BIOS reset existed. I ended up putting the computer in the freezer and letting it cool down for an hour. I placed the CRT display on top, with the power/VGA/keyboard cables going into the freezer. I managed to set it back to the original frequency before it died.

zeroCalories
2 replies
1d5h

When I was a teenager my friend would throw his laptop into the freezer for a few minutes every hour when we were playing games. He probably threw it in there hundreds of times, and it worked fine for years.

bombcar
0 replies
14h27m

A friend had an overheating laptop that even an external cooler couldn’t keep up with so he got to sit right next to the open door in winter.

We called it the Frozen Throne.

MountainMan1312
0 replies
1d3h

I don't know why but this reminds me of how we picture-framed my friend's old Wifi chip after replacing it, because that chip failing all the time was basically the core feature of our group's gaming sessions.

bryanlarsen
1 replies
1d4h

I kept a supply of coins in the freezer. I would regularly toss a few into the heatsink on my TRS-80 that was unstable after a RAM upgrade.

ad404b8a372f2b9
0 replies
9h5m

You guys are really smart. When I was a kid I had a graphics card that would overheat and crash the computer when I played Lineage. So I would get down under my desk and blow on it...

tomduncalf
0 replies
1d5h

Hahah this is amazing!

gia_ferrari
0 replies
11h2m

Once my phone died from a cracked solder joint. I had cold veggie sausages in the hotel room fridge. Holding my phone against the sausages let me grab a couple more files off of it. Saved my OTP keys that way (I've fixed my backups now :) )

DamnInteresting
7 replies
1d2h

Around 16 years ago, Wordpress security was just not up to snuff yet, and my popular Wordpress-based site kept getting hacked by pharmaceutical spammers and the like. After several such incidents, I wrote a "wrapper" that loaded before Wordpress to scrutinize incoming requests before a lick of Wordpress code was executed. It had blacklists, whitelists, automatic temporary IP blocking, and that sort of thing. There was no reason for visitors to upload files, so any non-admin POST request with a file upload was automatically smacked down.

It wasn't pretty, but the hackers never got through again, and that clunky thing is still in service today. I coded it to quarantine all illicit file uploads, and as a consequence I have many thousands of script kiddies' PHP dashboards from over the years.

justusthane
1 replies
14h28m

When you say it loaded before Wordpress loaded, what exactly does that mean? Was it a proxy that handled incoming requests and passed them off to Wordpress?

amiga-workbench
0 replies
9h58m

I imagine just including the code in index.php before the bootstrapping process actually loads up Wordpress for real. That way you could just halt the script early after noticing a funny request.

earthboundkid
1 replies
15h15m

That reminds me of my terrible spam prevention hack. We kept getting a bunch of spammers signing up for our newsletters, so I made the form require a JavaScript based hidden input to submit. That worked for a while, but then new spammers started executing the JS and getting through. So I added new JS that just waits 15 seconds before putting the right hidden values in the form, and that’s done the trick (for now).

jpc0
0 replies
10h43m

So CSRF?

kayodelycaon
0 replies
14h9m

Nice. I just had nginx requiring basic auth for anything hazardous.

Probably broke a few features I’m not using in the process. :)

gavinray
0 replies
1h36m

Hey just a thought:

16 years is a long time, many of those shell dashboard sources are probably lost.

It'd be interesting to upload them somewhere as a "historical museum" of sorts.

timeagain
6 replies
14h13m

I had a database that was in a boot-crash loop because it had a corrupted innodb_history_list for a given table.

Everything would be ok if we could just delete the table, but that would involve opening a session and that wouldn’t be possible because of the immediate crashing.

On a whim I thought, "well, what if I just have a really short amount of time to connect before it reboots?" So I opened up 5 terminal windows and executed "while true; do mysql -e 'drop table xyz;'; done" in all of them.

After about 10 minutes one of the hundreds of thousands of attempts to connect to this constantly rebooting database succeeded and I was able to go home at a reasonable time.

2024throwaway
5 replies
13h45m

This is why remote work is so important. You could have been home the entire time.

SCUSKU
3 replies
13h37m

To play devil's advocate, had OP not wanted to go home so badly Parkinson's Law would've kicked in and OP may have tried to do things the "right way" which may have taken much longer.

hutzlibu
1 replies
8h34m

Any ideas on what the "right way" would be in this case? To me the solution seems the most straightforward.

fragmede
0 replies
6h43m

Drop the table from some sort of safe mode, or figure out the bad entry in the table and hex edit the file to exclude it, or find/write some sort of recovery/fsck program for the particular database flavor in question. Those are three alternatives that come to mind for me, which is to say, I wouldn't have thought to spam the db like that. Neat trick!

plugin-baby
0 replies
8h45m

Also if the DB was on-prem, then the latency when connecting from home might have been too high for the hack to work.

GuB-42
0 replies
3h40m

This actually highlights a negative aspect of remote work. When you work from home, it is easy to lose track of time and end up working the whole night. Here GP had a clear motivation: solve the problem in time to get back home and presumably disconnect.

That's why I actually like to work on site on Fridays. Because I know that when I leave the office, I am done for the weekend. And if I stay for too long, security will kindly remind me that the office is closing and I should leave. So laptop turned off, in the bag, and it stays there for the weekend. Even better if Monday is also on site, since I can just leave the laptop in the office, locked away.

It is a psychological trick, but it works for me. Your mileage may vary.

On a more technical note, don't assume the database can be administered over the internet/VPN. Real private networks still exist.

singingfish
6 replies
11h39m

This regex that took about 10 minutes to generate was amazingly effective and helped me earn my highest ever daily rate https://gist.github.com/dr-kd/d43c884fbac0089d8523

wodenokoto
1 replies
7h45m

How is that better than just checking against a list of suburb names?

_0ffh
0 replies
3h35m

The regex is a compressed representation which saves memory, and it's also likely to be quite a bit faster which saves cycles. I consider it a clever bit of optimisation.

caesil
1 replies
11h13m

that took about 10 minutes to generate

How?

Matheus28
0 replies
10h39m

I'm assuming:

    1. Generate a list of suburb names separated by |
    2. Simplify the regex:
    2.1. Convert to a NFA
    2.2. Turn it into a DFA
    2.3. Merge redundant states
    2.4. Turn it back into a regex
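
The gist's actual method isn't stated, but as a sketch of a simpler route to a compact pattern (a trie compiled into nested alternations rather than full NFA/DFA minimisation; the suburb names below are just examples):

    import re

    def build_trie(words):
        """Insert each word into a nested-dict trie; '' marks end of word."""
        root = {}
        for word in words:
            node = root
            for ch in word:
                node = node.setdefault(ch, {})
            node[""] = {}
        return root

    def trie_to_regex(node):
        """Compile the trie into nested (?:...) alternations."""
        if set(node) == {""}:
            return ""
        has_end = "" in node
        alts = [re.escape(ch) + trie_to_regex(child)
                for ch, child in sorted(node.items()) if ch != ""]
        if len(alts) == 1 and not has_end:
            return alts[0]
        body = "(?:" + "|".join(alts) + ")"
        return body + "?" if has_end else body

    suburbs = ["Abbotsford", "Abbotsbury", "Airds"]
    pattern = re.compile(r"\b" + trie_to_regex(build_trie(suburbs)) + r"\b", re.IGNORECASE)
    print(pattern.pattern)  # \bA(?:bbots(?:bury|ford)|irds)\b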

veeti
0 replies
7h51m

Do they ever coin new suburb names? What happens then?

phanimahesh
0 replies
7h34m

How did you generate this?

nomilk
6 replies
15h40m

Mine's getting command output out of docker. For long builds (I had one that took 4 hours), it was gutting to have it fail a long way in and not be able to see the output of the RUN commands for thorough debugging.

So I devised a stupidly simple way: add && echo "asdfasdfsadf" after each RUN command. I mashed the keyboard each time to come up with some nonsense token. That way, docker would see RUN lines as different each time it built, which would prevent it using the cached layer, and thus would provide the commands' output.

I wrote the same thing (more completely) here: https://stackoverflow.com/a/73893889/5783745

(a comment on that answer provides an even better solution - use the timestamp to generate the nonsense token for you)

As stupid as this solution is, I've yet to find a better way.

sopooneo
1 replies
15h37m

I'm a Docker amateur, so this will be a dumb question, but if you were using that technique after every run line in a DockerFile, wouldn't they be the same every time they're run? Like, it's random, but it's the same random values stored in the file, so wouldn't the lines get cached? Or did you adjust the DockerFile each time?

Or am I completely misunderstanding?

nomilk
0 replies
15h34m

Or am I completely misunderstanding?

No, I just didn't explain it very well. You have to mash the keyboard each time (i.e. each build) to come up with some new token. The reason this (dumb) idea was so useful was that it was a choice between that and either running with --no-cache (i.e. waiting 2-3 hours) or building normally and not having a clear idea why it failed (since there's no console output for cached layers). So taking a moment to mash the keyboard in a few places, as absolutely stupid as that is, was way better than the alternative of not having complete console output (docker provides no way to --no-cache on a per-layer basis, hence my stupid way of achieving it).

hyperhopper
1 replies
8h49m

Why can't you echo a newly generated GUID?

nomilk
0 replies
8h43m

I had tried && openssl rand -base64 12, but docker assumes that if the command hasn't changed (even though it generates a new random value) then it can rely on the cache for that layer. A comment below the answer on Stack Overflow points out you could do the same thing using an environment variable (with a GUID or datetime to ensure no duplicates); that's a nice approach.

sim7c00
0 replies
5h46m

a lot of my debugging involves just echo or print 'hello-N' tokens. some systems just need this :D love it. "hello-9". yay it almost worked xD.

jddj
0 replies
7h49m

This reminded me of an ongoing hack that I have running in a production webapp.

When I was writing it there was no easy opt-in cache busting (renaming scripts at build time, or applying a static random querystring to the script requests, etc) in the minimal framework I was using and I was getting away with not using a task runner or anything like that.

It was one of those "would be nice, but effort is better spent elsewhere" things, so to this day before I do releases I still bump a hardcoded version number that I pass to the templating engine for each route.

I should fix that.

fzeindl
6 replies
13h59m

I wanted a smart thermostat but my 30 year old natural gas heater didn‘t support them. I only had a wheel which I could turn to set the temperature.

So I took double sided tape, stuck a plastic gear on the wheel and put a servo on with another gear on the side, connected to a raspberry pi, that would turn the wheel when my phone would enter a geofence around the flat.

Picture: https://ibb.co/nDvwndp

I even had a bash script to calibrate the servo which would turn the wheel and ask which temperature it set, so it could figure out the step size.

lqet
0 replies
7h51m

This reminds me of an old Bosch clock I own. This was one of the early electric consumer clocks. It looks very futuristic. But inside, it is simply a mechanical clock with an electric motor attached to the wind-up mechanism. Every few minutes, the motor spins up for a second and winds the clock up again.

https://www.youtube.com/watch?app=desktop&v=0DU0KX9gIk8

The clock is extremely reliable, though. The last batteries lasted for 10 (!) years.

lloeki
0 replies
10h32m

I've been smartifying dumb devices at home as well and came up with similar rubegoldbergesque solutions, although ultimately I didn't need to actually implement any of these as I found other ways to achieve my goals.

One of them involved pointing an old webcam at a segment display, converting the shitty image to monochrome in a way that leaves vague shapes for the digits and other state icons but clamps everything else to oblivion, and just using some fixed pixel positions to "read" and get the device state from that.

Made for some fun prototyping though.

Also reminded me of: https://thedailywtf.com/articles/ITAPPMONROBOT

eps
0 replies
7h53m

This doesn't really qualify as "stupid" though.

That's just the good old "interfacing with legacy systems" routine. :)

callalex
0 replies
10h19m

I just took a 24v AC wall wart power supply and shoved it in the same terminals as the hot and neutral control on the smart thermostat. The A in AC makes this arrangement work just fine to allow the battery of the thermostat to charge without zapping anything.

ajsnigrutin
0 replies
9h14m

There are whole product ranges for stuff like this now (stuff where you cannot change the interface, which is built for fingers and not automation).

eg:

https://www.youtube.com/watch?v=6RJ-zWJcEKc (not affiliated, cheaper models available on AliExpress, both bluetooth and zigbee) - you stick it on somewhere and it pushes a button for you. With added accessories, it can even push (technically pull) the (e.g.) light switch the other way, so you only need one per light switch. You can also use it to restart a server/PC, press a TV remote button or even a garage/ramp opener, etc., with zero electronic knowledge and modification (if you're renting out and don't want to replace stuff).

NetOpWibby
0 replies
13h6m

Brilliant

anon_cow1111
6 replies
11h58m

All of these other anecdotes are (understandably) sysadmin or developer-related, though I have a couple very stupid ones that are completely physical and mechanical.

My first residential plumbing fix was a simple unclogging of a shower drain, which was accomplished by putting a couple inches of water in the tub and blasting repeatedly into the drain with a pneumatic cannon. The splatters hit the ceiling, but it was thoroughly unclogged after a few repetitions.

The second major one was the sewer line at mom's place, which ended up being a very bad clog that nasty industrial drain cleaner would not even touch. I capped every drain and vent in the entire house, and dumped two 30 gallon compressor tanks of air into the two toilet hookups. There was a great and concerning watery rumbling from all through the house, but after 30 seconds or so, the clog was blown into the septic and has not given her a single problem since then.

sowbug
3 replies
11h39m

For those without a pneumatic cannon, look for a "drain bladder" that attaches to a garden hose. Costs about $10. Mine has never failed me.

Levitz
1 replies
5h57m

I know little about plumbing; don't you risk breaking something by exerting pressure on the installation?

pierat
0 replies
3h50m

I've broken 2" PVC pipes by using a plunger, baking soda, and vinegar. You can easily get way above rated pressures doing that... doubly so if the PVC is older.

After doing that, NOT fixing the problem, and still having a massive plug, I did the next best thing. I cut the run of offending pipe out (clog included), used PVC primer/glue with a fixed joiner, and then bought a thick rubber joiner with 2 screw retaining rings, and joined the other side that way.

In hindsight, I absolutely should have rented a power auger and chipped at the clog and freed it. But I didn't.

anon_cow1111
0 replies
11h28m

*Note for those with a drain bladder, it will be much safer and more practical than blasting something with a cannon, but not as much fun as blasting something with a cannon.

eps
0 replies
7h44m

Unclogged a toilet once by lifting the seat, wrapping its bowl in kitchen plastic wrap and then using the resulting "membrane" to pump enclosed air volume down the drain. To my complete astonishment it worked on a second try!

blub
0 replies
11h12m

I was kind of scared to read until the end. :)

How did you cap them so that the caps didn’t get blown off by the pressure?

acheong08
6 replies
1d5h

This was a while ago so it probably wouldn’t work today.

I had to get past a captcha for automation and the solution I came up with was to always choose 2. If it was incorrect, just request a new captcha until it passed. For some reason, 2 was the answer most of the time so it actually rarely had to retry anyways

Snacklive
3 replies
1d5h

Definitely wouldn't work today. Nowadays you need to classify like 30 images of bicycles and 20 fire hydrants and pray to god before they accept your answer...

acheong08
2 replies
1d3h

This is why I don’t have an account with Snapchat/Instagram/etc. I tried signing up and physically couldn’t get past the challenges. I take too long to solve them and then I’m asked to solve more…

bonton89
1 replies
1d1h

Sometimes if they hate your client and IP they put you into a captcha tar pit that you only think you can get out of. Only a bot would keep trying, but a human will die in there too, even if they have the tenacity of a bot.

yowlingcat
0 replies
2h28m

I've noticed this too. Seems like this should be illegal.

teddyh
1 replies
14h26m

For some reason, 2 was the answer most of the time

I'm getting flashbacks to LensLok™; the two-letter codes were often very hard to read through the plastic lens, and when you'd get the code wrong enough times, you'd have to load in the entire program again (from tape!), which took ages. There was also a "training" mode, of sorts, to help people familiarize themselves with reading the codes. In the training mode, the code letters were always "OK". But here's the kicker: for some unfathomable reason, the real code was very often (but not always) also the letters "OK"! So it was easier (at least on your eyes) to just always enter the letters "OK", and hope it worked. Whatever time you lost reloading when it didn't work, you'd save by not having to adjust the scale to the size of your TV every single time.

exikyut
0 replies
9h14m

Oof, wow. That's a fun little rabbithole :x

I found one video demonstration: https://youtu.be/Wpn9sLNg-6k?t=310

zblesk
5 replies
20h43m

I implemented an enterprise data migration in javascript, running in end-user's browsers. (So no server-side node.js or such.)

It was a project scheduled for 2-3 months, for a large corporation. The customer wanted a button that a user would click in the old system, requesting a record to be copied over to the new system (Dynamics CRM). Since the systems would be used in parallel for a time, it could be done repeatedly, with later clicks of the button sending updates to the new system.

I designed it to run on an integration server in a dedicated WS, nothing extraordinary. But 3 days before the scheduled end of the project, it became clear that the customer simply would not have the server to run the WS on. They were incapable of provisioning it and configuring the network.

So I came up with a silly solution: hey, the user will already be logged in to both systems, so let's do it in their browser. The user clicked the button in the old system, which invoked a javascript that prepared the data to migrate into a payload (data -> JSON -> Base64 -> URL escape) and GET-ed it in a URL parameter onto a 'New Record' creation form into the new system. That entire record type was just my shim; when its form loaded, it woke another javascript up, which triggered a Save, which triggered a server-side plugin that decoded and parsed the data, which then processed them, triggering like 30 other plugins that were already there - some of them sending data on into a different system.

I coded this over the weekend and handed it in, with the caveat that since it had to be a GET request, it simply would not work if the data payload exceeded the maximum URL length allowed by the server, ha ha. You will not be surprised to learn the payload contained large HTMLs from rich text editors, so it did happen a few times. But it ran successfully for over a year until the old system was eventually fully deprecated.

(Shout out to my boss, who was grateful for the solution and automatically offered to pay for the overtime.)
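
For reference, the encoding chain described above (data -> JSON -> Base64 -> URL escape, carried in a GET parameter) is only a few lines; the parameter name and target URL here are invented:

    import base64
    import json
    from urllib.parse import quote, unquote

    def encode_payload(record: dict) -> str:
        """data -> JSON -> Base64 -> URL escape, ready to use as a GET parameter."""
        raw = json.dumps(record).encode("utf-8")
        return quote(base64.b64encode(raw).decode("ascii"))

    def decode_payload(param: str) -> dict:
        """Reverse the chain on the receiving side."""
        return json.loads(base64.b64decode(unquote(param)))

    record = {"id": 42, "title": "Example record", "body": "<p>rich text</p>"}
    url = "https://new-crm.example/new-record?migration_data=" + encode_payload(record)
    # The caveat above still applies: exceed the maximum URL length and the request simply fails.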

MBCook
4 replies
14h25m

That’s horrible. I love it!

I’m not quite sure I understand why it was GET though. No way of running something like fetch or (more likely) XMLHTTPRequest?

YouWhy
3 replies
13h24m

I think the OP (hat off!) needed a way to transfer data to the front-end of another application. Since there's no back end involved, the only available channel is the request URL

mkl
1 replies
8h48m

Since there's no back end involved, the only available channel is the request URL

Not quite. I have a system that uses a custom userscript to create an extra button on certain webpages that, when clicked, scrapes some data from the current page and copies a lightly encoded version to the user's clipboard. They then switch to another webpage and paste it in a box.

I've also gotten data from place to place using scraping from temporary iframes (same site).

zblesk
0 replies
1h42m

That guess was actually quite close. The target system does support that out of the box as a way to pre-fill data into a form, but only over GET.

MBCook
0 replies
13h20m

Oh that would make sense. Thanks for the guess.

rpastuszak
4 replies
8h30m

Fixing a CD drive with Polish Kielbasa:

The CD drive in my first computer broke. We couldn't afford to get a new one, and after almost a year of using floppies I got a bit tired of having to carry them across the mountains every time I wanted to play a new game. (context: I lived in a small village in southern Poland at the time -- imagine Twin Peaks, but with fewer people and bigger hills). Sometimes getting a copy of Quake or Win 95 took several trips back and forth, as I didn't have enough floppies and the ones I had would get corrupted.

I turned 10 and finally decided to disassemble the drive and try to fix it. I found the culprit, but I realised that I needed a lubricant for one of the gears. At that exact moment my little brother was just passing by our "computer room", eating a bun with kielbasa (the smoky, greasy kind which is hard to find outside of PL). I put some of that stuff on a cotton swab, lubricated the gears in the drive, and magically fixed it. The drive outlived my old computer (may it rest in pieces). I miss a good Kielbasa Wiejska.

lawgimenez
0 replies
1h19m

Reminds me of my brother, who would bring 6-7 floppies to a cafe just to download an anti-virus update.

inglor_cz
0 replies
5h8m

Animal fats make a very good lubricant if the temperature of the parts doesn't rise too high.

Once upon a time, car transmissions used whale oil.

eps
0 replies
7h32m

This is glorious.

badsectoracula
0 replies
7h26m

I have a similar story, except instead of a CD drive and kielbasa I had a floppy drive (on an XT clone) and I used oil... olive oil, that is :-).

It worked perfectly for years after that.

leonheld
4 replies
1d6h

`sed` text files as a replacement for templating.

In the text file you have something you want to template (or "parametrize") from an outside variable, so you name that something like @@VAR@@ and then you can sed that @@VAR@@ :-)

j4yav
1 replies
1d6h

Wait, you're telling me this isn't a Best Practice™?

zblesk
0 replies
20h11m

It totally is.

teddyh
0 replies
14h32m

I think the m4 macro processor might be the more canonical way to do it. Unless you’re writing a shell script, in which case a “here-document” with embedded $VARIABLES is more straightforward.

Semaphor
0 replies
1d5h

That's how we do it. Not with sed exactly, but string replacement. One is a bulk email sender that only supports VBScript, the other is C#, but the users aren't supposed to (or don't need to) have full templating powers, so this way it's easier.

omgbear
3 replies
1d3h

I built a vacation plant waterer with some tubing, 3d-printed heads, submersible pumps and an Arduino. For a longer trip, I needed a source of water that I could pump from.

I realized the toilet tank is self-refilling because of the float valve and won't overflow. So I cleaned it out and it made a good place to pump from with my submersible pumps.

myself248
2 replies
14h23m

Oh this is genius. That might just be my new humidifier hack.

moozilla
1 replies
8h20m

FYI you shouldn't use anything other than DI water in humidifiers. Using tap water can cause bacterial/fungal buildup and emits harmful particulates into the air.

https://dynomight.net/humidifiers/

stormking
0 replies
7h24m

The risk of bacterial infection is overblown. But you still don't want to use tap water for a humidifier because you'll get limescale all around the device.

hennell
3 replies
1d4h

Needed to get data out of a CRM system for specific printed orders - when it was printed, who processed it, what was on the order etc.

The process of authenticating with the CRM was complex, there wasn't a way to get anything at print time, and most of the data was stored all over the place.

But I found the printed report knew almost everything I wanted, and you could add web images to the paperwork system. So I added a tiny image with variable names like "{order_number}.jpg?ref={XXX}&per={YYY}", and then one for each looped product like "{order_number}/{sku}.jpg?count={X}&text=...", etc. After a few stupid issues (like no support for https, and numbers sometimes being in European format) it was working and has remained solid ever since. Live time-stamped data, updates if people print twice, gives us everything we wanted, just by a very silly method.
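
The receiving side of a trick like that can be almost nothing: a handler that serves a tiny image and logs whatever path and query string arrive. A minimal sketch (port, paths, and log format are invented):

    import base64
    import logging
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    logging.basicConfig(filename="print-events.log", level=logging.INFO)

    # The well-known 1x1 transparent GIF, so the paperwork system gets a real image back.
    PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

    class PrintTracker(BaseHTTPRequestHandler):
        def do_GET(self):
            parsed = urlparse(self.path)
            # e.g. /12345.jpg?ref=XXX&per=YYY -> order number in the path, details in the query
            logging.info("printed %s %s", parsed.path, parse_qs(parsed.query))
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(PIXEL)))
            self.end_headers()
            self.wfile.write(PIXEL)

    if __name__ == "__main__":
        HTTPServer(("", 8080), PrintTracker).serve_forever()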

azalemeth
2 replies
8h13m

As a European I'd like to know how our numbers are formatted differently from yours! (Did you mean dates? The never-ending source of pain!)

gia_ferrari
1 replies
8h2m

To me (USA) 1,000 is one thousand and 1.000 is one, and to me (Italian) 1,000 is one and 1.000 is one thousand :)

azalemeth
0 replies
4h33m

Ahh, that one. D'oh!

gcanyon
3 replies
14h37m

Back in the '90s I consulted at HBO, and they were migrating from MS Mail on Mac servers to MS Exchange on PCs. Problem was that MS Mail on the Mac had no address book export function, and execs often have thousands or even tens of thousands of contacts. The default solution was for personal assistants to copy out the contacts one by one.

So I experimented with screen hotkey tools. I knew about QuicKeys, but its logic and flow control at the time was somewhat limited. Enter <some program I can't remember the name of> which had a full programming language.

I wrote and debugged a tool that:

   1. Listened to its own email box: cole.exporter@hbo.com
   2. You emailed it your password (security? what security?)
   3. Seeing such an email, it logged out of its own email and logged in to yours.
   4. Then it opened your address book and copied out entries one by one. 
   5. It couldn't tell by any other method that it had reached the end of your address book, so if it saw the same contact several times in a row it would stop.
   6. Then it formatted your address book into a CSV for importing to Exchange, and emailed it back to you.
   7. It logged out of your account, and back into its own, and resumed waiting for an incoming email.

This had to work for several thousand employees over a few weeks. I had 4 headless pizza box Macs in my office running this code. Several things could go wrong, since all the code was just assuming that the UI would be the same every time. So while in the "waiting" state I had the Macs "beep" once per minute, and each had a custom beep sound, which was just me saying "one" "two" "three" and "four". So my office had my voice going off an average of once every fifteen seconds for several weeks.

binwiederhier
1 replies
14h9m

The voice thing is hilarious. Thanks for sharing.

myself248
0 replies
3h13m

I did a similar thing in the Win9x days. I had some sound alert going off once in a while and I couldn't figure out what was causing it; worse, I didn't even recognize the sound. (It wasn't the standard "ding" or "chord".)

And when I went into the Windows sound scheme configurator, it had wacky names for some events like "asterisk" and "critical stop", with no explanation of what might trigger them.

So as a first step of narrowing it down, I made self-explanatory sounds for everything: I just recorded my voice saying "open program", "program error", "restore down", "exclamation", and so on, through the whole list, and assigned each sound to its respective event. There were a lot of them!

(Mind you, it was all the rage at the time to have whole collections of funny sounds assigned to all this stuff, movie lines and SFX and what-not, so there were these subtle games of one-upmanship to have a cooler sound scheme than anyone else.)

Not me. I had created the world's most humorless sound scheme. The only possible improvement would've been Ben Stein voicing the whole thing.

But in doing so, after a while, it took on this air of absolute hilarity. Like here's this machine that's capable of anything, it could make a star-trek-transporter sound, but there's just some guy's voice saying "empty recycle bin" with a flat, bored affect.

electrondood
0 replies
13h59m

Cole ExPorter. lol.

Saigonautica
3 replies
10h12m

Years back, there was a CDROM drive I really needed to work at the moment, but it was jammed shut. I could hear it trying to open, and failing.

I reasoned that the motor holding it shut had mostly failed in some way, but could still exert some force. So I popped the cover off the drive, took out the magnet that holds it shut, cooked it on a gas stove for a few seconds, and put it back in.

The Curie temperature for a neodymium magnet is a few hundred degrees, but practically speaking, they will lose a lot of magnetism even at lower temperatures. I popped it back in and the drive worked for another year or so.

wodenokoto
2 replies
8h43m

Didn’t all cdrom drives have a pin-hole you could jam a needle into and manually open them?

smeej
0 replies
6h54m

Sure, but weakening the magnet lets it return to more or less "normal" functionality, not requiring you to reach over and stick a pin in a very small hole every time you want to change the disk.

austinjp
0 replies
8h32m

"It's not stupid if it works" :)

Kinrany
3 replies
1d6h

I used SQLite for coordination between processes. It was a huge Python application that originally used the multiprocessing library and had to be migrated to Rust.

In hindsight, it would have been better to use a local HTTP server. Seemed like overkill at the time.
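
As an illustration of the general shape (table and column names invented), an SQLite file can act as a small shared work queue between processes, with SQLite's own locking doing the coordination:

    import sqlite3

    def get_conn(path="coordination.db"):
        conn = sqlite3.connect(path, timeout=30, isolation_level=None)  # autocommit; explicit BEGIN below
        conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
            id INTEGER PRIMARY KEY,
            payload TEXT NOT NULL,
            state TEXT NOT NULL DEFAULT 'pending')""")
        return conn

    def enqueue(conn, payload: str) -> None:
        conn.execute("INSERT INTO tasks (payload) VALUES (?)", (payload,))

    def claim_one(conn):
        """Atomically claim the oldest pending task, or return None if there is none."""
        conn.execute("BEGIN IMMEDIATE")  # take the write lock so two workers can't grab the same row
        row = conn.execute(
            "SELECT id, payload FROM tasks WHERE state = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is not None:
            conn.execute("UPDATE tasks SET state = 'running' WHERE id = ?", (row[0],))
        conn.execute("COMMIT")
        return row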

dehrmann
2 replies
13h17m

I'm confused. Was SQLite for the migration?

hnfong
1 replies
12h56m

Probably IPC.

dehrmann
0 replies
28m

What I meant is that doesn't make sense for the Python version because multiprocessing has its own mechanisms for doing that. Same for the post-migration Rust binary.

smallpipe
2 replies
1d5h

Monkey patching vendor code. They agreed their code didn’t work and produced wrong results, but the correct version would be slower, so they didn’t want to change it.

So I dynamically replaced the part of their code that was wrong. That monkey patch has run for years and is still going :)
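
In Python, for example, the whole patch can be a few lines run once at startup (the vendor module and function names below are invented; the point is just the shape):

    # patch_vendor.py -- import this before anything uses the vendor code.
    import vendor.pricing  # hypothetical vendor module with the known-wrong function

    _buggy_round_total = vendor.pricing.round_total  # keep a handle on the original

    def _correct_round_total(amount, currency):
        """Slower but correct replacement for the vendor's broken calculation."""
        # ... the fixed computation would go here ...
        return _buggy_round_total(amount, currency)  # placeholder delegation in this sketch

    # Rebind the name on the vendor module; callers that look it up via the module get the fix.
    vendor.pricing.round_total = _correct_round_total

One caveat with this shape: code that did `from vendor.pricing import round_total` before the patch ran keeps a reference to the original, so the patch has to be applied early.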

mcv
0 replies
1d5h

Who cares if it's wrong, as long as it's fast?

MBCook
0 replies
13h51m

You just reminded me of an old story.

The company I was working for purchased some kind of email spamming program to send mass emails. However they quickly found that when more than a small number (10 to 15,000?) of addresses were in the list it took forever.

Support wasn't being helpful, so I decided to look at it. It was some kind of PHP application, and what I eventually found was that it was doing database queries with full table scans for everything, because of a missing index that was incredibly obvious.

I added the index, the program worked well into huge numbers of email addresses, and we informed the vendor.

The program was soon updated and it fixed the problem as well. Also all the PHP code was now obfuscated/encrypted.

I guess they didn’t like my helpful nature.

shw1n
2 replies
11h6m

My wife was pregnant and tired of doing cold video outreach for our recruiting business, so I wrote a python script that took her best video and just spliced in audio of her saying different people’s names and LinkedIn profiles.

To hide the lips not matching the audio, she did a little wave that covered her mouth at just the right time.

Example: https://www.loom.com/share/3162a767905c422b8fd423f7448e16f8

It ended up working so well it generated us over $500k in new business

And I ended up turning it into a SaaS (https://dopplio.com)

WizardClickBoy
1 replies
10h20m

I guess you're getting downvoted because this is basically video spam, but it's still clever!

shw1n
0 replies
1h55m

Thanks! Yeah it was really just a funny experiment in the beginning, I didn’t think it’d work haha

rcthompson
2 replies
12h44m

I'm in the middle of such a story right now. I'm doing research on a data set of COVID-19 hospital patients with multiple blood samples over time from each patient. The obvious thing we want to do with this data is to line up all the samples on a single timeline so we can see how the data changes over the course of COVID-19 from infection to resolution. Unfortunately, as with most infectious diseases, no one knows exactly when they were actually infected, which means we can't just sort them by time since infection and be done with it.

So, we set out to find some way of inferring the timeline from the data itself (RNA-seq and other molecular assays from the blood, in this case). The first thing we tried was to apply some standard methods for "pseudo-time" analysis, but these methods are designed for a different kind of data (single-cell RNA-seq) and turned out not to work on our data: for any given patient, these methods were only slightly better than a coin flip at telling whether Sample 2 should come after Sample 1.

Eventually, we gave up on that and tried to come up with our own method. I can't give the details yet since we're currently in the process of writing the paper, but suffice it to say that the method we landed on was the result of repeatedly applying the principle of "try the stupidest thing that works" at every step: assuming linearity, assuming independence, etc. with no real justification. As an example, we wanted an unbiased estimate of a parameter, and we found one way that consistently overestimated it in simulations and another that consistently underestimated it. So what did we use as our final estimate? Well, the mean of the overestimate and the underestimate, obviously!

All the while I was implementing this method, I was convinced it couldn't possibly work. My boss encouraged me to keep going, and I did. And it's a good thing he did, because this "stupidest possible" method has stood up to every test we've thrown at it. When I first saw the numbers, I was sure I had made an error somewhere, and I went bug hunting. But it works in extensive simulations. It works in in vitro data. It works in our COVID-19 data set. It works in other COVID-19 data sets. It works in data sets for other diseases. All the statisticians we've talked to agree that the results look solid. After slicing and dicing the simulation data, we even have some intuition for why it works (and when it doesn't).

And like I said, now we're preparing to publish it in the next few months. As far as we're aware (and we've done a lot of searching), there's no published method for doing what ours does: taking a bunch of small sample timelines from individual patients and assembling them into one big timeline, so you can analyze your whole data set on one big timeline of disease progression.

sokz
1 replies
12h34m

I'd love to read it when it comes out. Where should I look for it when it's published? Ignore if it's a stupid question.

rcthompson
0 replies
12h27m

We will be posting it on a preprint server when we're ready to submit it to journal review, hopefully some time in February (but who knows, publishing timelines are murky at the best of times). The title will be something along the lines of "Reconstructing the temporal development of COVID-19 from sparse longitudinal molecular data", though that's likely to change somewhat.

r0ze-at-hn
2 replies
12h15m

When I was younger I learned that sed was Turing complete. So I did what any young woman would do: I built an entire search engine in sed. But it wasn't like a useless little search tool that provided bad search capability for a website; no, nearly every page (minus a few like the about page) of the site was nothing more than a presentation on top of the search query results. Several thousand hardcoded "known" pages and infinite possible pages depending on user searches. Because it was the foundation of the site, search worked, unlike most website searches of the era (~2005). This site happily ran for about a decade, with surges of traffic now and then, before a server migration and too little time prevented its continued existence.

Adult me is both horrified and impressed at this creation.

wodenokoto
0 replies
8h4m

Did you have a file named “main.sed” or was everything a giant bash script that started with “sed $(cat << EOF …”?

phanimahesh
0 replies
7h38m

Wow. Out of everything, this is the most impressive.

nisalperi
2 replies
13h47m

I built a writing/formatting product now used by 60k+ indie authors. One of the requirements was to format PDFs for print publishing with different themes and configurable layouts. Instead of building a custom PDF rendering engine, I decided to use Puppeteer to generate the PDFs.

But there were a bunch of issues we had to deal with:

- To render the gutter (margin in the middle) you had to know which side of the book each page would fall on.

- To generate the headers and footers, you had to know the exact page number for each of the pages.

- You had to know how many pages the table of contents would take up, but you couldn't know the page numbers for each chapter until the book was fully generated.

What I ended up doing was to generate multiple PDFs for each chapter, header, footer, and table of contents separately, then stitching them together very carefully to build the final export. Super hacky, but it ain't stupid if it works!
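
The stitching step itself can stay fairly tame; a sketch of merging per-section PDFs in order with pypdf (filenames invented, and the page-number bookkeeping described above is the genuinely hard part):

    from pypdf import PdfWriter  # pip install pypdf

    # Each piece was rendered to its own PDF: front matter, ToC, then the chapters.
    sections = ["frontmatter.pdf", "toc.pdf", "chapter-01.pdf", "chapter-02.pdf"]

    writer = PdfWriter()
    for section in sections:
        writer.append(section)  # appends every page of that file, in order

    with open("book-interior.pdf", "wb") as fh:
        writer.write(fh)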

magnio
0 replies
13h45m

Given what I know about the PDF spec and the various papercuts I've had with PDF tools, your solution seems as sane as it can be, tbh.

azalemeth
0 replies
6h41m

Out of curiosity, did you consider transpiling to LaTeX? Memoir.cls is fantastic at those sorts of calculations and actually the formatting output from pandoc is usually quite good too.

menage
2 replies
12h24m

At Google almost 20 years ago, a bunch of our machines (possibly with slightly bespoke CPUs?) were behaving oddly. These machines were mostly in use for serving Google's web index, so almost the entire RAM was devoted to index data; the indexserver processes were designed to be robust against hardware failure, and if they noticed any kind of corruption they'd dump and reload their data. We noticed that they were dumping/reloading massively more often than we'd expect.

Eventually the cause was narrowed down to that, randomly when the machine was stressed, the second half (actually, the final 2052 bytes) of some physical page in memory would get zeroed out. This wasn't great for the indexservers but they survived due to the defensive way that they accessed their data. But when we tried to use these new machines for Gmail, it was disastrous - random zeroing of general process code/data or even kernel data meant things were crashing hard.

We noticed from the kernel panic dumps (Google had a feature that sent kernel panics over the network to a central collector, which got a lot of use around this time) that a small number of pages were showing up in crash dump registers far more often than would statistically be expected. This suggested that the zeroing wasn't completely random. So we added a list of "bad pages" that would be forcefully removed from the kernel's allocator at boot time, so those pages would never be allocated for the kernel or any process. Any time we saw more than a few instances of some page address in a kernel panic dump, we added it to the list for the next kernel build. Like magic, this dropped the rate of crashes down into the noise level.

The root cause of the problem was never really determined (probably some kind of chipset bug) and those machines are long obsolete now. But it was somehow discovered that if you reset the machine via poking some register in the northbridge rather than via the normal reset mechanism, the problem went away entirely. So for years the Google bootup scripts included a check for this kind of CPU/chipset, followed by a check of how the last reset had been performed (via a marker file) and if it wasn't the special hard reset, adding the marker file and poking the northbridge to reset again. These machines took far far longer than any other machines in the fleet to reboot due to these extra checks and the double reboot, but it worked.

gavinray
0 replies
1h39m

I have a burning desire to know what was going on here -- fascinating.

MobiusHorizons
0 replies
1h25m

That’s awesome

j4yav
2 replies
1d6h

It's not the best story - I'm sure there are some greats here - but I tricked GitLab into running scripts that looked like https://gitlab.com/jyavorska/c64exec/-/blob/master/.gitlab-c... by modifying a runner to pass everything through the VICE Commodore BASIC emulator. It would even attach the output file as an artifact to the build.

gcr
1 replies
1d5h

That’s incredible! Why though??

j4yav
0 replies
1d3h

A small contribution to the increase of nonsense in the world

dmazzoni
2 replies
10h15m

In the early days of Google Chrome, I was tasked with making it work with Windows screen readers. Now, accessibility APIs on Windows were documented, but web browsers used a bunch of additional APIs that were poorly documented. Chrome's design was radically different than Firefox's and IE's, so it was a challenge to implement the APIs correctly. At first I was just trying to get it to work with completely static web pages.

Screen readers were reading form controls, but no matter what I did they weren't activating any of their web-specific features in Chrome. I spent weeks carefully comparing every single API between Firefox and Chrome, making the tiniest changes until they produced identical results - but still no luck.

Finally, out of ideas, I thought to build Chrome but rename the executable to firefox.exe before running it. Surely, I thought, they hadn't hard-coded the executable names of browsers.

But of course they had. Suddenly all of these features started working.

Now that I knew what to ask for, I reached out and made contact with the screen reader vendor and asked them to treat Chrome as a web browser. I learned another important lesson, that I probably shouldn't have waited so long to reach out and ask for help. It ended up being a long journey to make Chrome work well with screen readers, but I'm happy with the end result.

throwaway914
0 replies
10h9m

That is a good story :) It makes me wonder what future Wayland protocol may enable clients/apps to advertise themselves as screen-reader capable.

jwoq9118
0 replies
6h38m

Love the lesson here. Don’t wait to reach out for help!

tbensky
1 replies
10h16m

Back in the early days of Linux (~1993 or so), I was trying to convince my workgroup in grad school that Linux was a nice alternative for running our numerical simulations (in C). But I knew I had to get the graphics working, since no one wanted to use a text-only interface. It was a Dell PC using some graphics card; the word 'tulip' is in the fog of my memory. The graphics driver was not loading, giving some error about a reference not being found in the .o file. I didn't know what to do, so I loaded up the .o file in vi, searched for the offending reference, edited it out and saved the .o file. From then on the graphics worked great!

minimaul
0 replies
5h54m

tulip is a set of DEC ethernet controllers that are from about the right time as your anecdote :)

swissfunicular
1 replies
14h12m

My corporate VPN wouldn't connect to the same public IP after 24 hours (I work from home). So I wrote a very crude bash script on a Raspberry Pi Zero 2W (which also runs a local DNS server) that would telnet into the router 5 minutes before my login time and reboot it.

rnoorda
0 replies
13h23m

I had a router that would need to be restarted every once in a while, so I just plugged it into a Christmas light timer to go off for 15 minutes early every morning. Worked well until we left that apartment.

ramses0
1 replies
11h54m

...an awful one for posterity: an abomination of vim + awk as a proto-protocol plus UI editor.

Basically awk would match `/^FOO / { system("foo.exe " $0) }`

...you could get pretty darned far with that mechanism, for near minimal amounts of code.

Any time you pressed "enter" on a line of text in vim, it'd get thrown through that awk script.

If a line matched a command matched in the awk file (think GET, POST, SEARCH, ADD, etc), it'd execute that awk block, which was often just calling over to another executable which did the searching, adding, etc.

The interesting thing about it was using it as a UI... you could basically "expand in place" any particular line... have commands return subsequent commands to pick from, etc.

Plus the ability to "undo" via vim commands and the fluency of effectively an ad-hoc REPL was a really liberating experience.

anthk
0 replies
5h7m

You reinvented Acme in vi.

paulgb
1 replies
1d5h

I used a WiFi smart switch and a USB thermometer to make a sous vide cooker. I plugged a slow cooker into the smart socket, put the thermometer in it, and wrote a program to turn the switch on/off depending on the temperature the thermometer registered.
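
A minimal sketch of that control loop (simple bang-bang with a little hysteresis); the thermometer read and the smart-switch call are stand-ins for whatever the real devices expose:

    import time

    SETPOINT_C = 60.0   # target water temperature
    HYSTERESIS = 0.5    # slack so the switch doesn't chatter on and off

    def read_temperature() -> float:
        """Stand-in for reading the USB thermometer."""
        raise NotImplementedError

    def set_switch(on: bool) -> None:
        """Stand-in for toggling the WiFi smart socket."""
        raise NotImplementedError

    heating = False
    while True:
        temp = read_temperature()
        if not heating and temp < SETPOINT_C - HYSTERESIS:
            set_switch(True)
            heating = True
        elif heating and temp > SETPOINT_C + HYSTERESIS:
            set_switch(False)
            heating = False
        time.sleep(10)  # slow cookers drift slowly; poll gently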

myself248
0 replies
2h51m

A friend of mine brute-forced the access key on a particular piece of industrial hardware by using an X10 outlet relay module, predecessor to today's wifi-connected smart sockets. The key was only 4 digits long and it would lock out after 3 attempts and require a reboot, so he was looking at a few thousand reboots each taking a minute or so. Easy enough to run over a weekend.

It ended up wearing out one relay, but he had a spare and just swapped it in and restarted the script from where it hung up. For the cost of like $30 and a few days, it gave us a new unobtanium unlocked unit.

mmcgaha
1 replies
12h51m

This is a little different than what you are asking but when I was a kid my C64 would crash if it was on too long and I would lose all of my work. If I ran my mother's vacuum cleaner plugged into the same outlet as the C64 then it would not crash. No I cannot explain it but it worked so that vacuum cleaner would get turned on before I started to save my work to tape.

sowbug
0 replies
12h38m

The vacuum probably lowered the voltage, preventing a component from overheating.

jcalvinowens
1 replies
15h18m

I "fixed" an appliance that was nuisance tripping an AFCI breaker by wrapping the power cord one turn through a ferrite choke.

Noumenon72
0 replies
14h42m

I like how ChatGPT lets you speak efficient jargon while I can read in layman terms. It says an Arc Fault Circuit Interrupter is supposed to detect arcing electrical faults, but some appliances have arcs in their normal operation, causing nuisance trips. A ferrite choke is a ring of magnetic ceramic that is designed to suppress high-frequency electromagnetic interference in electronic circuits. Clever and practical, says ChatGPT.

jawns
1 replies
1d5h

I worked for a US media company that forced us to use a half-baked CMS from a Norwegian software company, with no apparent provisions in the contract for updates or support.

The CMS was absolutely terrible to work in. Just one small example: It forced every paragraph into a new textarea, so if you were editing a longer news story with 30 or 40 paragraphs, you had to work with 30 or 40 separate textareas.

So I basically built a shadow CMS on top of the crappy CMS, via a browser extension. It was slick, it increased productivity, it decreased frustration among the editors, and it solved a real business problem.

If we had had a security team, I'm sure they would have shut it down quickly. But the company didn't want to pay for that, either!

matsemann
0 replies
9h29m

Enonic? As a contractor in Norway I saw that in multiple places before headless etc. became popular.

One hack I remember that fits this thread is someone using Enonic as a headless CMS long before headless CMSes were a thing. Basically every string in their frontend apps had a key, and that key was a hierarchy of articles in Enonic. So a whole "Enonic article" for every single piece of text: every button, heading, and menu element was backed by its own article.

That meant that the editors could edit any piece of text in the SPA from Enonic. An article in /cms/myapp/something/myelement we would then export to the key myapp.something.myelement, and we did that for all ten thousand or so small text string articles, and then built that into the SPA with a sync job running regularly.

We also had a way to turn off templating in the SPA. Appending ?showKeys or something to the url would print the keys instead of the content, helping the editors know which article to edit for that element on the page.

intellectronica
1 replies
7h34m

SQLite on the server. I've done this again and again since the mid-2000s, each time getting reactions ranging from ridicule to horror, and each time achieving great results.

dan-g
0 replies
7h18m

Thankfully these days not as unheard of as it once was! https://fly.io/blog/all-in-on-sqlite-litestream/

gigatexal
1 replies
12h57m

This was such a fun thread to read. I love seeing people just come up with solutions however inelegant. I do this myself: get something working and then circle back to make it more maintainable etc.

While working as a new DBA for Microsoft SQL Server on a team of very seasoned DBAs with decades of experience each, I wrote a small Python script to fix a replication bug that MS refused to fix.

The bug, IIRC, was something related to initial snapshot settings: a setting was off by default, so replication would fail. They would normally go and edit this file manually when it happened. When it really blew up, it could mean editing the file in tens of locations! It was just something they had resigned themselves to doing, since it happened just infrequently enough for them not to invest time in fixing it, but when it did happen it could take an hour or so to do. My noob self thought: that's far too tedious, I am not doing that.

The bug really should have been fixed by MS, with the flag set to on by default. My script would just find the relevant UNC share and text file, do a find-and-replace on that line, and toggle the flag; then replication would work again. I could point the script at a server and it would figure out the path and do the work. All I then had to do was enumerate the servers that were affected, and it was fixed in no time.

This fix was so popular when I showed it internally that they asked me to turn it into a GUI application. It was awesome. I learned a bit of C# and from what I heard a few years back my little tool was still in USE! Huzzah

nonethewiser
0 replies
12h49m

In some ways this is also a “nothing is more permanent than a temporary solution.”

falcor84
1 replies
1d5h

In a previous role, I automated an unholy amount of business processes by adding doGet() / doPost() handlers to expose google sheets as basic web services. It's a bit slow for large sheets, but was quite nice to work with and troubleshoot, and the built-in history in google sheets allowed me to experiment with little risk of data loss/corruption.
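
The pattern is just the two Apps Script entry points, with the sheet deployed as a web app. A minimal sketch (TypeScript-flavoured; the sheet name and columns are made up):

```typescript
// GET: dump the sheet as JSON.
function doGet() {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Requests')!;
  const rows = sheet.getDataRange().getValues(); // [headerRow, ...dataRows]
  return ContentService
    .createTextOutput(JSON.stringify(rows))
    .setMimeType(ContentService.MimeType.JSON);
}

// POST: append a row built from a JSON body.
function doPost(e: GoogleAppsScript.Events.DoPost) {
  const body = JSON.parse(e.postData.contents);
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Requests')!;
  sheet.appendRow([new Date(), body.requestedBy, body.amount]);
  return ContentService.createTextOutput('ok');
}
```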

davedx
0 replies
1d5h

Investment banking analysts: “this is a hack??”

ess3
1 replies
8h35m

As a kid I acquired my parent's login to the school platform, meaning I could call in sick myself. However, one day I actually got sick, so they had to call it in, which meant they would've seen all the previous calls.

So I downloaded the HTML for all pages required for this exact flow and removed the previous sick days. I then changed my etc/hosts file, gave them my computer and prayed that they wouldn’t try to visit any other page than the ones I downloaded.

Worked like a charm. Later I called in sick myself.

gavinray
0 replies
1h43m

Absolutely brilliant

I used to do the same with school report cards, which began being delivered electronically when I was in Middle School ;^)

emeth
1 replies
8h24m

B2B startup in the sales/financial space. We were solely targeting Salesforce customers in the USA at the time, who conduct all transactions in USD ($).

So sprinkled throughout our very large app were thousands of dollar signs - some rendered on page load in the HTML, some constructed via JS templates, some built on graphs in d3.js, and some built in tooltips that occasionally popped up on the aforementioned graphs.

One day, a Sales guy pops in with "Hey, I just sold our app to a big player in Europe - but they need the currency to display in Pounds instead of Dollars" (might have been pounds, might have been some other European currency - memory is a bit hazy).

CEO steps in and supports Sales guy, says their demo starts in a few days - and the demo needs to take place with the client's data, on their instance, and show Pounds instead of Dollars.

Wat?

Small dev team, 5 members. We gather and brainstorm for a couple hours. Lots of solutions are proposed, and discarded. We get access to client's instance to start "setting things up" and poke around a bit.

We discover that all the field names are the same, and SF was just storing them as numbers. No currency conversions had to be done. We literally just needed to display the pound symbol instead of the dollar symbol.

One of the devs on my team says "Hey guys, I have a dumb idea..."

In short, he remembered an extension from back in the day called "Cloud2Butt". When you loaded a page, it would scan all html on the page and transparently and instantly replace all instances of the word "Cloud" with the word "Butt". Recollecting this, the dev wondered if we could look at their code, and write something similar to crawl the DOM tree and just replace all dollar symbols with pound symbols. The resulting "fix" would then just be a simple JS function we put on top of our stack, instead of refactoring thousands of files.

So... we tried it. With one small addition (making it do it on a setInterval every 100ms, which took care of the tooltips on the graphs) it worked flawlessly. We intended it as a stopgap measure to buy us time, but there were no complaints so we just let that run in production for several years, and the app expanded to several more currencies.
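
The whole "fix" really can be about a dozen lines. A minimal sketch of that kind of DOM-rewriting shim (not their actual code):

```typescript
// Walk every text node under a root and swap the currency symbol in place.
function replaceCurrencySymbols(root: Node): void {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  for (let node = walker.nextNode(); node; node = walker.nextNode()) {
    if (node.nodeValue && node.nodeValue.includes('$')) {
      node.nodeValue = node.nodeValue.replace(/\$/g, '£');
    }
  }
}

// Re-run on a timer so late-rendered content (d3 tooltips, templated widgets)
// gets caught too -- the setInterval-every-100ms trick from the story.
setInterval(() => replaceCurrencySymbols(document.body), 100);
```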

hyperdimension
0 replies
2h38m

The cloud2butt user script is great. It also translates "the cloud" to "my butt" for bonus fun.

diarrhea
1 replies
1d6h

`envsubst` on a k8s manifest, for templating. The space for templating/dynamic k8s manifests is complex, needlessly so I felt. But this... just works. It has been running in CI for a couple months now, deploying to prod. I'm sure the day that breaks down will come, but it's not here yet.

salamander014
0 replies
13h10m

Are you me?

I’ve been working on the same thing for a few months now.

Not only is it more customizable / less complicated than helm / other solutions, but GNU gettext is almost 30?! years old at this point, and environment variables are probably realistically double that age. They ain't going anywhere anytime soon.

Plus I feel that more complex logic removes value from the configs we are building, and so am not interested in many other tools.

baz00
1 replies
1d6h

I made my own version of AWS workspaces inside AWS because workspaces is a buggy piece of shit and the client sucks. It's just an EC2 instance which can be started and stopped by a makefile that runs awscli and I query the IP address and open it in MS RDP!

justin_oaks
0 replies
14h12m

At my company we used AWS workspaces for training classes. Whenever we'd have a class, we'd create 1 or 2 spare workspaces. If someone couldn't connect to their workspace, we'd give them the connection information for one of the spares.

I was so happy when we reworked our classes to no longer use AWS workspaces.

andrewaylett
1 replies
5h5m

I was working on a graphing platform for a bank, back in the days of Adobe Flash. We were replacing server-rendered static charts with charts rendered in Flex, but as we neared completion and entered testing the client noticed an issue: they had a batch job that generated PDF reports containing charts, but the PDF still used the old server-generated charts!

Given that the new charts were rendered on the client, this seemed to be an impossible ask -- certainly the client didn't expect us to be able to solve it, once they realised their mistake.

I bodged together the standalone Flash player with a headless XServer and some extra logic in Flex, so it would repeatedly hit an endpoint, render a chart with the data returned, then post it back up to another endpoint. It took a couple of rounds of back-and-forth with their IT folk, but it worked! And we heard a couple of years later that it was still running happily in production.

For several years I left "Adobe Flex" off my resume, I hope it's dead enough now that I can safely admit to having known how to develop for it. I'm still quite proud of having invented the monstrosity that was "Server-side Flash".

junto
0 replies
1h51m

It’s still alive on Adobe’s website at least

alasdairking
1 replies
18h24m

ZX Spectrum BASIC. Numbers could only be 8 digits, and I needed more for a Spacemaster RPG ship designer program I wrote for my friends. Came up with storing values as strings and splitting/manipulating them as numbers when required. I was about fifteen years old. Probably the smartest thing I have ever written, grin.
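
For flavour, the same trick sketched in TypeScript (the original was Spectrum BASIC slicing strings): keep big numbers as digit strings and do the arithmetic column by column.

```typescript
// Keep arbitrarily large integers as digit strings and add them by hand.
function addDigitStrings(a: string, b: string): string {
  let result = '';
  let carry = 0;
  for (let i = a.length - 1, j = b.length - 1; i >= 0 || j >= 0 || carry > 0; i--, j--) {
    const sum = (i >= 0 ? Number(a[i]) : 0) + (j >= 0 ? Number(b[j]) : 0) + carry;
    result = String(sum % 10) + result;
    carry = Math.floor(sum / 10);
  }
  return result;
}

// addDigitStrings('99999999', '1') === '100000000' -- one digit more than the
// 8 digits the story says the interpreter could handle.
```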

treesknees
0 replies
15h16m

Hey, sounds like most of my Advent of Code solutions :)

YaBa
1 replies
1d6h

A bunch of SQL triggers and procedures to overcome software limitations and workaround certain bugs which the developers won't fix (3rd party app).

quickthrower2
0 replies
1d5h

Reminds me of when we started implementing features as an Oracle trigger. It was meant to be “just a trigger”, but there are so many edge cases when you do an end-run around application code that it took a couple of weeks total. Boss was like “a couple of weeks for a trigger!”

Tknl
1 replies
13h21m

Back when I was a junior engineer I made a small system for dealing with industrial printers. It took either direct printer-language commands or PDFs, exported the printer driver output to a file via Redmon (a printer driver output redirector), and then FTPed it to the printer over the network. This avoided manually installing drivers for hundreds of printers, and it may still be sold as part of a newer project. It's hacky af but still works and beats the alternatives.

EvanAnderson
0 replies
12h36m

I love Redmon!

I had a Customer who used Zebra industrial label printers for labeling product. The print jobs came from one of their Customers' ERP servers, sent over a VPN directly to the production line.

The Customer might send hundreds of labels in a single job. If the roll of label stock ran out during the job their server would re-send the job from the beginning after the printer was reloaded. This meant somebody had to find and dispose of the duplicate labels (or risk re-using a serialized label).

The Customer said that they couldn't modify the ERP software that was composing the jobs.

A friend and I wrote a parser for the Zebra "ZPL" printer language to ingest the large jobs, split them into single label jobs, then shoot those single labels into the printer. We used Redmon to intercept the jobs coming from the Customer's ERP server into an LPR queue on a Windows server machine. Redmon would hand the job off to the label splitter.
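
For anyone who hasn't seen ZPL: each label is an ^XA ... ^XZ block, so the splitting half of that idea can be sketched like this (a toy TypeScript version, not the real parser):

```typescript
// Split one big ZPL job into single-label jobs at the end-of-format (^XZ) marker.
function splitZplJob(job: string): string[] {
  return job
    .split('^XZ')                              // ^XZ ends each label format
    .map(chunk => chunk.trim())
    .filter(chunk => chunk.includes('^XA'))    // keep only real ^XA...^XZ blocks
    .map(chunk => chunk + '\n^XZ');            // restore the terminator we split on
}

// Each element of splitZplJob(bigJob) can then be sent to the printer one at a
// time, so a label-stock runout only ever repeats a single label.
```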

NickM
1 replies
1d5h

When I was a teenager I had a friend who wanted to build a PC on a very limited budget, and she wanted it to be able to play The Sims 2. Well, after much bargain hunting and throwing ideas around, we couldn't find a way to afford every component we needed, but we were close, so we decided to forgo a case! Just put the motherboard on the desk with the other components arrayed around it. Cables everywhere. The tricky part was that we had no power button, but I showed her which pins to short out with a paper clip, and it worked great.

Scoundreller
0 replies
8h31m

Runs much cooler that way for the most part.

Mr_Twinkles
1 replies
9h3m

I once had a CD that I really needed to make a backup of, but it was scratched to hell and truly unreadable. I put it in a pot of boiling water for a few minutes, and presto: the CD was readable for a while, letting me back up the files. It'd cool off and become unreadable again, but I'd just put it back on the stove and repeat until I had all of the data.

Narciss
0 replies
5h42m

Wow, was not aware of this hack. Really cool! (no pun intended)

ChicagoDave
1 replies
13h59m

I was pitching a “Slack” app five years before Slack was released. Everyone I pitched to thought it was stupid because people already used IRC.

eps
0 replies
7h17m

Slack was an internal side project for something else they were working on, so they too weren't thinking much of it at first.

zwnow
0 replies
1d6h

I use a table object and an OnAfterModifyRecord trigger to process OData calls in Navision 2018 (an ERP system). For some reason I cannot call actions manually, so I write whatever I want to do into a table and process it accordingly with triggers.

wruza
0 replies
1d5h

I had to connect an old accounting system to a web app with an enhanced UI (an operator determines a payment on a visual graph of contracts between companies, plus a graph editor). There were two ways: a separate db with periodic sync, or a direct COM connection to the old app, which was scriptable through a js<=>COM library. I chose the latter; tests worked fine.

After a month or so I started to notice that something is wrong with performance. Figured out that every `object.field` access through a COM proxy takes exactly 1ms. Once there’s enough data, these dots add up to tens of seconds.

_<

Instead of doing a rewrite I just pushed as much of the JS logic as possible beyond the COM boundary, so there's only a constant or small number of `a.b.c` accesses on my side. Had to write a JSON encoder and object serialization inside the old app to collect and pass all the data in one go.

The web app was abandoned few months later for unrelated reasons.

webmaven
0 replies
1d2h

One of the first things I built as a developer at the first startup I worked for (circa 1998 or 1999, I was originally hired as a graphic and web designer) was a system I wrote in Allaire ColdFusion that used Macromedia Flash Generator to render and save custom graphic page headers and navigation buttons for e-commerce websites by combining data stored in an Access database with Flash templates for look and feel.

volkadav
0 replies
6h23m

Kinda mundane but it fits with the theme: well before 9/11, a client of the small web agency I was at wanted full drag and drop file management for their company's driver/manual download site, all integrated with their web admin portal. this would've been hilariously complex to implement "web native" with the browsers of the time, and their budget was ... not large. so we slapped an iframe in that part of the admin portal with the url pointed to ftp://admin:whatever@dl.theircorp.com. dumb, but it worked and took like a minute to hack out; client was happy. so it goes?

unwise-exe
0 replies
11h38m

Coworker: hey, what's up with this code? We're having issues getting it to do X.

Me: oh yeah, I just hacked it together and never finished cleaning it up to make sense and be adaptable. Here's enough context to hack it again quickly, or if you have time here's my old notes on how to do it right.

.

I.e., stupid things that seem to work for now, tend to turn into technical debt later.

unnamed76ri
0 replies
5h10m

I once unfroze a pipe by packing Hot Hands hand warmers around it and burying it in horse poop.

trws
0 replies
1h32m

Running the largest workload I’ve ever personally launched (4,200 nodes, 144 cores each, 16,000 simultaneous jobs with a mix of one 1,000 node cpu job, 1 AI selection service on one node, about 3,200 nodes worth of 4-core cpu input pre-processing jobs and 4 GPU jobs per node co-located with all the CPU stuff) at 2am the day of the deadline for something about 50 of us had been working on for most of a year, realizing something was set wrong in the launcher. It was our last chance for a full run, we couldn’t start it over or try again, and it was going to fail because of a single runtime value.

I attached gdb to the launcher, “print <var>=<value>” and detached. The run started going about 10 times faster and we got the whole thing done. Crazy, dangerous, but it worked.

Question also made me think of the last-minute change we needed to make to a database’s structure to avoid about 6 million lost user updates when all DB admins were out with no password. That was a fun one too. Not sure I should admit how we managed it.

throwaway667788
0 replies
15h18m

My organization has a firewall policy straight outta the 90s. They'll only allow static-IP-to-static-IP traffic rules over single ports. This conflicts with modern cloud CI/CD where you don't know ahead of time what IP you're gonna get in a private subnet when doing a new build.

Our workaround was to configure HAProxy as a reverse proxy / load balancer and do creative packet forwarding. Need to access an Oracle database on prem? Bind port 8877 to point at that database's IP on port 1521 and submit a firewall rule request.

throwawaaarrgh
0 replies
7h58m

At a hacker con we built a public cluster of terminals. One set of terminals was a bunch of 486's that net-booted a mini Linux distro with just a kernel and an X client, connecting to a hardened X server.

I left a root prompt in one of the VTY's so people could mess around with it, but it being a net-booted BusyBox shell, there wasn't much to play with. We weren't concerned about MITM, because the network was token ring - in the mid 2000s. Convention full of hackers, and a bunch of machines with root shells open, but nobody hacked it, because nobody could get on the network.

We shipped all the gear up on an Amtrak from Florida to NYC. We had no budget nor interest in shipping them back or storing them, so we stacked them on the sidewalk and started yelling about free computers. In two hours they were gone.

taneq
0 replies
13h15m

In my youth I did VJing at a local night spot, and one time (the first time?) my PC didn't boot up (not uncommon after being tossed in a car and driven 40km). Naturally I didn't have a monitor with me to debug it (since my 19" CRT weighed about as much as I did), only a composite video output card which didn't work until after the system had booted. So after pondering my predicament for a minute or so, I unplugged the computer and gave it a sharp smack on the side of the case. This re-seated whatever had been un-seated, it booted fine and from there the gig went off without a hitch.

swader999
0 replies
5h21m

Almost anything with autohotkey. Never outside of my own pc though.

spacecadet
0 replies
1d5h

Define works? I've seen stupid and not working, but convinced it's working, until proven otherwise...

I used to work part time restoring Fiat, Porsche, and VW rares for an old head out in the Midwest; lots of "stupid but works" in those old cars... Mercedes-Benz once (1980s or so) employed glass containers to solve fuel pressure problems. Insane coolant loop designs, or early fuel injection systems that develop "ghosts" lol...

snom380
0 replies
8h59m

The company I worked for had our office renovated/redecorated. As part of that, we moved offices temporarily, and one of the larger meeting rooms was relocated to a remote building. We didn't have a way of wiring up that meeting room to the rest of our office network, but building management were able to give us internet access in the room.

I wanted our employees to be able to roam to that meeting room transparently without any hassle. I knew that OpenVPN had a layer 2 tunneling mode, that could bridge two ethernet networks over VPN. With two leftover workstations, I set up an OpenVPN server in the main office, and an OpenVPN client at the meeting room. By bridging the OpenVPN interface to the ethernet interface on the client, I was able to connect a switch, WiFi access point and videoconferencing equipment. Everything worked perfectly, with even DHCP requests going over the VPN.

snom380
0 replies
9h22m

Our startup needed to automate our invoicing process. Back in the day there were only a couple of reasonably priced, web based ERP systems in our country.

The one we picked had good API docs, but we didn't read the fine print - API access was a high yearly fee, costing almost as much as the regular subscription fee.

Their web interface had functionality for importing orders/invoices from a CSV file, and looking at the browser requests I could see it was simply using the API from their frontend. A couple of hours later, we had our invoice import job doing POST requests to their login page, getting the right cookies back, and uploading invoice files.
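
In spirit the import job is only a couple of requests. A hedged TypeScript sketch (the URLs, field names, and cookie handling are invented; the real thing depended on whatever their login page actually set):

```typescript
// Log in the way the browser does, keep the session cookie, then hit the same
// CSV import endpoint the web UI uses.
async function uploadInvoices(csv: string): Promise<void> {
  const login = await fetch('https://erp.example.com/login', {
    method: 'POST',
    headers: { 'content-type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ username: 'invoice-bot', password: 'placeholder' }),
    redirect: 'manual', // we only care about the Set-Cookie header, not the redirect
  });
  const cookie = login.headers.get('set-cookie') ?? '';

  const form = new FormData();
  form.append('file', new Blob([csv], { type: 'text/csv' }), 'invoices.csv');
  await fetch('https://erp.example.com/invoices/import', {
    method: 'POST',
    headers: { cookie },
    body: form,
  });
}
```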

Worked fine for years, only requiring a couple of updates when they changed their login page.

slowbdotro
0 replies
1d6h

Sshfs so I can upload images onto my server to send links. :|

sideshowb
0 replies
8h57m

As a kid programming games in BBC basic on the Archimedes, I didn't know how to do graphics with things moving around smoothly, but I found some code for changing the sprite for the mouse pointer and moving it to a desired location on screen.

Guess how I implemented my main character.

scanr
0 replies
9h40m

Restarting a service every night to deal with memory leaks comes up quite often

sandreas
0 replies
1h45m

I recently implemented pure-todo[1] just for myself, because I did not like any of the todo list apps out there...

I chose to not use any boilerplate, frameworks or libraries as long as I could get along without them. Did not respect any ES6-or-whatever limitations of browsers and used whatever I wanted to use. Just pure PHP, sqlite, vanilla JS and CSS like in the old days, with ES6 flavour here and there :-) What sounded really stupid because I ignored all the fancy frameworks made me learn a lot of things, and it also turned out that it just works on my systems (Android, iOS, Linux, Windows with Firefox, Safari and Chrome). Maintenance will be a nightmare though...

Caution: Use it at your own risk because there will pretty likely be breaking changes in the near future :-)

[1]: https://github.com/sandreas/pure-todo

sam_bristow
0 replies
13h55m

At my previous job I had a number of production test rigs in China for testing PCB assemblies as they came off the manufacturing line. These rigs would sit on the shelf for months at a time and be pulled out to do a run of a couple thousand boards, then put away again.

We wanted to collect some stats about the boards being tested, but the internet in the factory was really flaky and we didn't want to pay for a 4G internet plan for a rig that was turned off most of the time.

I eventually went for a cron job that would just try uploading all the local logs to s3 through rsnapshot every 15min. It worked great and was less than 20 lines of shell script.

rincebrain
0 replies
10h58m

A couple years ago, I had an idea for convincing a filesystem to go faster using 2 compression steps instead of one. I couldn't see why it wouldn't work, and I also couldn't convince myself it should.

It seems to have worked out. [1]

[1] - https://github.com/openzfs/zfs/commit/f375b23c026aec00cc9527...

qarue
0 replies
12h23m

I often used to regret buying an item at Costco at a high price and a few days later seeing a discount on the same item.

Later found out that Costco has price adjustment policy if price of an item is reduced within 30 days of purchase.

Created a simple app to tell me if the price is reduced. It's not awesome, but it works. I have got a few hundred dollars back so far :)

https://reclaimo.vercel.app/

pomatic
0 replies
1h31m

It's the early 90's. A large regional Theatre wants to migrate their billing and customer contact data - all of which is held on an ancient computer the size of a desk - to a shiny 286-based PC. Only the proprietary database management application frontend was available; there was no documentation, no programming tools or anything similar, the equipment was just too old (and the internet was still in its infancy; it'd be much easier today). My solution was to knock up a script on a PC that emulated the serial printer connected to the ancient system, wire a custom cable between them, and then tell the database to run a full report. Took a day or two to run as I recall, and job's a goodun.

pmontra
0 replies
1d5h

Mid 90s, can't remember the tech (VB, C, Java?) In the very last hours before an important demo one of the programs stopped working. Not always, only every second time we run it. No version control, no unit tests. It's obviously some side effect but debugging it before the demo and making changes could make it worse. Maybe it won't even run anymore, anytime. We decide to wrap it into a script that starts it, kills it, runs it again. That worked and made us pass the demo.

pavelstoev
0 replies
15h0m

In the undergrad Control Systems course, I brute-forced Kalman Filter matrix to balance the inverted pendulum on a track experiment. Worked fine.

pards
0 replies
4h46m

I was working on a trading system at a bank that didn't have automated deployments so we relied on the Ops team to manually release code to production on our behalf. Release plans had to be documented in MS Word with step-by-step instructions that the Ops team would literally copy and paste into a terminal. It was the worst kind of Sarbanes-Oxley separation of duties theatre.

This system was built in Java and was launched using a simple shell script that used a `for` loop to build the classpath by looping over all the JARs in the lib folder and appending them to a shell variable.

With the release "process" in mind, hotfixes and patches had to be kept as simple as possible. To release a hotfix we would JAR up the one or two classes that needed to be patched into a single JAR file, then modify the startup script to prepend that JAR to the classpath so that the patched classes were found first, taking advantage of Java's classpath loading order.

neilcar
0 replies
1d3h

When I was a network support engineer, we had a case where a company had a bizarre & intermittent problem on workstations. Wish I could remember what the problem was but this is 20+ years ago now.

To troubleshoot it, we installed Microsoft Network Monitor 2.0 (this was well before Wireshark...) on a few workstations. NM2 installed a packet capture driver and a GUI front-end. And... the problem went away.

Our best guess was that the problem was some sort of race condition and installing the packet capture driver was enough to change the timing and make the problem go away. The customer didn't want to spend more time on it so they installed NM2 everywhere and closed the case.

I occasionally imagine somebody trying to figure out why they're still installing the NM2 driver everywhere.

myself248
0 replies
2h30m

Before CPU throttling or fan speed control, I wanted my 486 to be quieter when it wasn't doing much.

My siblings and I had built a 486DX/40. Intel's chips topped out at 33MHz until you got into the clock-doubled DX2/50 (which ran its bus at 25 MHz) or the DX2/66 (33). But AMD's DX/40 wasn't clock-doubled, the core and the bus both ran at 40MHz. In the days before accelerated graphics, all pixel-pushing went through the CPU, so this was a very big deal for games.

It also ran hot enough that a CPU fan, optional-but-a-good-idea on slower chips, was an absolute necessity here. But the fan (a little 40mm Crystal Cooler™) would never be described as silent.

So when I wasn't gaming, I'd remove the 3.5" floppy drive from the case (it was in a modular sled, and the cables had enough slack that I could just unplug it), then reach in through the resulting opening, and re-jumper the clock-generator IC on the motherboard. Live, and blind. I knew that moving the jumper-cap one position towards me would go from 40MHz to 8, and that was enough to seriously drop the heat. The first time I did this I was flabbergasted that the machine didn't crash, but evidently it was fine? The no-jumper state was something like 12MHz or whatever, so presumably it was just blipping through that speed as I moved the jumper.

Then I'd unplug the CPU fan and enjoy the silence. This was particularly nice during a long download, where servicing the UART ISR didn't exactly take much processor time.

Even better, the computer room was in the basement, and the FM radio signal from my favorite station was pretty weak. So weak that, if I tuned just a smidge off to the side of the station, my radio receiver would pick up little ticks of interference from the computer, while still being able to hear the music. This meant I could turn off the monitor too, and just listen to music while downloading whatever file.

When the ticks stopped, the UART buffer was no longer triggering an interrupt, meaning the download was done, so I could turn the monitor back on and resume my BBS session, clock the CPU back up and plug the fan back in to unzip the file in a reasonable amount of time, and otherwise get on with my day.

mycall
0 replies
3h16m

To keep my MS Teams from going to "away from keyboard" and my work Windows 10 workstation from screen locking while AFK, I wrote a loop shell script which moves the mouse 1px left then 1px right every 30 seconds. Domain GPO can suck it.

mixmastamyk
0 replies
11h44m

Memory jogged of an “email system” I built on the school training LAN in the early 90s on DOS PCs via Netware and login scripts.

Later I replaced F:\login.exe with my own version written in Turbo Pascal to get the edge over a nemesis or two. :-D

michaelcampbell
0 replies
2h36m

Lowkey compared to some here.

A friend of mine was hit with a "prank" that created a directory hierarchy so deep the OS couldn't get down to the lowest levels of it. Not even `rm -rf` would work, and I couldn't `cd` to the lower levels to start deleting things one at a time.

I realized I could `mv` the top level directory to a sibling of its head, rather than to the child of it, cd into that, delete all the files, then do the same with any subdirs. So I was able to script that and started deleting the files/dirs from the top, rather than the bottom up. Took a while, but worked.

michael_j_x
0 replies
1d2h

I have centralized AC and the wall-mounted control panel is located in a small storage room. I wanted to hack the control panel with an Arduino and a Raspberry pi so I can control it remotely via my Alexa. I ended up buying a switch bot [0] and an IP camera and was done with it.

0: https://www.switch-bot.com/products/switchbot-bot

mcv
0 replies
1d5h

A company I worked for had a website where you could order mobile phones and subscriptions from different providers. This was just a frontend, and behind the scenes, they just ordered them directly from those providers. But those providers had terrible sites still written for IE6 (this was in 2010 I think). And yet those sites were all they had (for some reason; I don't know the full background).

So what happened is: the customer would order their phone subscription on the front end, that would create a job file that would be sent to a scheduler that managed 10 Windows VMs that used a Ruby Watir script to direct IE6 to fill in the data from the job file on the old decrepit website.

It's the most horrific hack that I ever touched (I forgot exactly, but I had to make some adjustments to the system), but it worked perfectly for a couple of years until those providers finally updated their websites.

marginalia_nu
0 replies
7h3m

I built a recipe detector. You can, you know, train some sort of AI model to do this like with fasttext, or maybe roll it old school and do naive bayesian inference, but as it turns out, you can also:

https://github.com/MarginaliaSearch/MarginaliaSearch/blob/ma...

It works annoyingly well.

lelanthran
0 replies
48m

In the days of dialup internet, I used a phone company that offered, between 1900 and 0700, free calls for calls under 30m.

So I wrote a cron job that, at 1905, started a script which sat in a loop, dropping the connection, dialing again, and then `sleep` for 29m, until the time got to 0655.

Very stupid, but it worked, I guess.

leeeeeepw
0 replies
14h19m

Running everything on my own home computers via cloudflared

eBank AI art generator and social media netwrck.com are running locally on my GPUs.

would have cost fortunes on the cloud

kukkeliskuu
0 replies
10h26m

In my first job, a spinning hard disk had been disturbing people for months by making screeching noises. I had been playing a lot with hardware so I knew that pushing the metal part on top of the disk could make that sound disappear. And it did! Told them to keep backups fresh and get a new hard disk.

kukkeliskuu
0 replies
11h33m

I had a very tight schedule to finish some courses and only one weekend to do a statistics exercise.

I only noticed after business hours that it was necessary to use DOS software built by the professor, and it was impossible to buy it during the weekend. It was available in the computer rooms, but I had no access. I got a copy of the SW on the net, but needed a license key. It was based on a challenge. Luckily I had a friend who had the SW and could give me a sample license key. I figured the challenge would be based on time, and I was right: it was using the epoch as a seed. So I made a script that booted a DOS box and brute forced one license key at a time. It took me a few hours, but I succeeded in cracking the SW so I could start the exercise.

kotaKat
0 replies
1d5h

My CPAP's onboard humidifier failed.

I ended up swapping it out to a generic in-line CPAP humidifier, but at the same time, realized I could partially automate the process of refilling the chamber (and not have to keep unhooking hoses) by adding an in-line oxygen tee, some aquarium plumbing, a check valve, and a 12 volt pump and switch.

In the morning I just hold a button and the tank magically refills itself ;)

Introducing Semi-Autofill(tm): https://i.ibb.co/NmDbVvw/autofill.png

(Also: The Dreamstation, while recalled, was personally de-foamed and repaired myself -- I don't trust Philips any further than I can throw them now. I now self-service my gear.)

kiernanmcgowan
0 replies
14h42m

Have a bunch of static data that should live in a database? Just check that sucker into git and load it into memory on boot. Bing bam boom.
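
Which can literally be this much code (a TypeScript sketch; the file name and record shape are made up):

```typescript
import { readFileSync } from 'node:fs';

interface Country { code: string; name: string }

// The "database" is a JSON file checked into the repo, read once at process start.
const countries = new Map<string, Country>(
  (JSON.parse(readFileSync('data/countries.json', 'utf8')) as Country[])
    .map(c => [c.code, c] as const),
);

export const lookupCountry = (code: string): Country | undefined => countries.get(code);
```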

kidintech
0 replies
3h51m

While working on an enterprise platform, one of its customers in the banking industry wanted to migrate from on-premises to running it on whatever cloud was popular at the time.

The migration was long, tedious, and overly complicated in its own right (e.g. one proposed solution to the "how do we migrate all data safely across the continent?" question involved armored trucks), but just as we reached the T-1 day mark, I realised we had forgotten something.

The customer was regulated by various entities, and so it had to deliver periodic audit logs in a particular format. The raw logs (stored in a cloud hosted bucket) would not be sufficient and had to be parsed; in order to process the logs into the desired format, the customer wrote thousands of lines of code in the platform that I was in the process of migrating. This code could only run on the platform, due to some other esoteric privacy regulation.

So there I was on a Sunday, with :

- a few hours to deliver up-to-date, formatted audit logs to regulatory entities or risk legal action

- raw logs in a cloud bucket that required ingestion and processing

- a new cloud platform that could process the logs but was unable to ingest data from the other provider's cloud bucket (due to some temporary allowlisting ingress/egress issue and this being one of the first migrations onto the new cloud)

- an onprem platform being decommissioned and no longer allowed to process the logs BUT capable of ingesting them

The solution I came up with was to have the data flowing:

log bucket in cloud provider A -> decommissioned platform running onprem -> connector I wrote that evening and had no time to test -> platform running on cloud provider B

The ship was afloat the next morning and everything was in order despite cutting it close; I am now a big fan of exhaustive planning, months in advance.

karmakaze
0 replies
1d3h

Way back I had a friend that wanted his (maybe) "Sargon" chess program to run faster. Luckily it was on the Atari 8-bit and I knew a thing or two. The program seemed to use standard b/w hires graphics nothing super fancy, so I thought I could make a pre-boot loader.

The theory was that the Atari spends a good chunk (30%) of its time on display memory access. That can be disabled (making a black screen) and re-enabled. My pre-boot program installed a vertical blank interrupt handler reading the 2nd joystick port: up/down for display on/off. After installing the handler, the program waited for a diskette swap and pretended to be the original program loader, reading the disk layout into memory and jumping to the start. Worked like a charm first go.

karmakaze
0 replies
1d2h

My favorite one is probably from when I was working at a retail Forex company where consumers would try to make money on currencies. There were a lot of support calls where they disputed the price they saw versus the price their order was entered at. My solution was to log the price when they clicked the trade button. The interesting bit wasn't that I logged the currency pair and price; instead, I did a tree walk of all the Java Swing GUI elements in the open trade window and rendered them into the log file as ASCII, using "(o)" for options, "[x]" for checkboxes, "[text_____]" for text fields, etc. I wasn't sure if it would work, as the elements were rounded to the closest line, and sometimes a line was just inserted between two others if an element was close to half a line in between, etc.

The ASCII 'screenshots' came out beautifully. From then on, when a call came in, we told them to use the view log menu item and scroll to the trade time; then they'd shut up quick. A picture is worth a 1000 words indeed.

jwsteigerwalt
0 replies
13h21m

Sales commission sheets that are just text files. Started doing it in 2007 in VBA when it was not a horrible solution. Between mergers, acquisitions, and divestitures of that sales org, I have rebuilt that system so many times. Today, a full serverless solution, but the output is still 90% the same and could be from the 80’s.

jonhohle
0 replies
12h31m

In the early 2000s I worked at an on-campus lab that did 3D printing back when that required machines that were 10s to 100s of thousands of dollars. One of the machines built blocks of paper that were each laser cut and laminated to one another. The company that made it went out of business and there wasn’t any software support. Many of the supported printers had tools to email when various events happened and we used that to page employees to come in and remove finished prints or whatever needed to be done.

This machine didn't have that capability nor any obvious extension points. I ended up writing a VB app that would poll the serial port, which the machine used to talk to the control system, and if the serial port was busy and then became free, send an email. Email was sent by writing a very simple SMTP client.

That program ended up working for another, lower cost 3D printer that we acquired later as well.

I ended up extending it for a 3rd printer to tail its log and look for a message it emitted when prints finished.

We shared it with a few places and got one of the printer companies to add email support later.

jjice
0 replies
3h38m

I work on an ETL service that interacts with a good few semi-diverse third-party systems. We organize these as individual jobs that get run; a job will process certain types of data and spawn different sub-jobs in specific situations.

We wanted to have a compile time map of this data and a graph of the sub jobs. To do this, I tossed together a thirty minute script that took the source of our job functions and then ran a few regular expressions on them to extract the data we needed. It was filthy. Regex on source files just feels wrong. Problem is, it's worked great for the last six months and it's still going strong, so we can't justify changing it. The generated data has been extremely valuable for optimizing data fetching paths for customers.
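
The regex-over-source idea is only a handful of lines. A hedged TypeScript sketch (the directory layout, file extension, and spawnJob() call shape are all invented):

```typescript
import { readdirSync, readFileSync } from 'node:fs';

// Build a job -> sub-jobs map by running a regex over the job source files.
function buildJobGraph(jobsDir: string): Record<string, string[]> {
  const graph: Record<string, string[]> = {};
  for (const file of readdirSync(jobsDir).filter(name => name.endsWith('.ts'))) {
    const source = readFileSync(`${jobsDir}/${file}`, 'utf8');
    const jobName = file.replace(/\.ts$/, '');
    // Assume sub-jobs are started with calls like spawnJob('otherJobName').
    graph[jobName] = [...source.matchAll(/spawnJob\(['"](\w+)['"]\)/g)].map(m => m[1]);
  }
  return graph;
}
```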

ivolimmen
0 replies
11h7m

Pentium machines were just becoming available on the market. I had just started working for a company when some customers started calling to say that their new machines could not run FoxPro with the applications we built for them. When running FoxPro it would crash with the message: "Division by zero". Nobody knew how to fix the issue (also no word from Microsoft or Intel). I told the customers to use SLOWDOWN.COM, a tool I had used to run the Digger game on 286 machines which also worked on more modern machines. Customers were happy and my employer was baffled.

iopq
0 replies
12h14m

The fan I bought was exactly the thickness that I had between the bracket and the top panel of a mITX build. But since it's the EXACT thickness, there's no way to put it there.

I ended up sanding the fan's plastic shroud for two hours to get it to fit. It's still in my desktop right now and I won't ever be able to get it out because it's just snug enough to go in.

href
0 replies
10h35m

As a young linux admin I had a webserver crash and fail to boot afterward.

I used a live CD to boot it, but could not fix the boot partition.

Since I was able to read the root partition I chrooted into it and started nginx from there.

It ran like that for a week while I was preparing a replacement.

hobs
0 replies
14h22m

I did a lot of bad database things - one time a client wanted me to upgrade their vendor software system but they had a managed service provider that they were in an active legal dispute with.

So after the MSP basically stopped responding to them they asked us if we could upgrade "without" SA - our company's position was it was their data so anything they wanted us to do with it was fine.

So I had a conundrum - how do I get SA without SA? Well - I knew we had one loophole - we used xp_cmdshell for some critical features of the app, and an unprivileged user can run it.

If you're not familiar with xp_cmdshell, it's basically a stored procedure that passes your commands out to a Windows shell, but it's pretty functionally limited on purpose.

I wanted to copy the data, move it around, make a backup, send that to a place, and so I wrote that code in powershell, then base64 encoded it (because it needed to survive shell encoding problems), then chunked it across the wire(because length problems), reassembled it, and then executed it with xp_cmdshell.

Worked like a charm.

hanselot
0 replies
1d5h

Wouldn't you like to know Microsoft powerbi team...

But for real I'd get fired if I said...

grumblepeet
0 replies
10h24m

Many many years ago I worked for a University that had a huge stock of old 386 and 486 pizza box style PC’s and we were implementing Citrix Metaframe on Windows 2000. It was decided to repurpose the older PC’s as thin client boxes, but in initial tests the users hated seeing the older Windows 3.11 OS booting up. We were in a hurry to get rolled out so I ended up making a small GUI on VB6 with two buttons - Load Thin client system and shut down. This replaced the Windows shell and loaded almost immediately. Took me something like 10 minutes to make.

Years later I went back (like 10 years later?) and they showed me a sleek new tiny thin client box, and when it loaded there was my VB6 screen with the familiar two buttons. Apparently the users loved it and so they had ported it across to the newer devices ever since.

gia_ferrari
0 replies
10h56m

We developed some early Windows 8 touchscreen programs. The desktop touch screens issued to us were early prototype garbage and locked up every couple minutes. So I wired up a MOSFET to interrupt the USB power lines at the touch of a pushbutton on my desk. It had to do so for a very specific length of time to ensure a proper monitor reboot. I used that for a year. Think I left it with a co-worker when I left. You could plug anything you wanted into its usb port if you needed a quick reset button.

gia_ferrari
0 replies
8h17m

Two car stories.

1. I got to my car and the battery was flat. Fortunately, I had ridden my home-built electric skateboard to it (this was before even the first Boosted Boards came out - completely homemade, I built the trucks, drive system, etc). I went to the Goodwill next door and bought an extension cord for the wire. I stripped the cord, wired the board's battery to the car, and was able to start it.

2. I was driving my classic car home when the alternator failed, at 2am, in the middle of a big city. With my headlights rapidly dimming, I managed to quickly exit and find a parking lot (just as my headlights completely died). Fortunately, I again had that skateboard. I rigged up the battery to the electrical system and made it to within a mile of my house before the fuel pump and spark ignition gave out completely. I easily walked home, put a new battery in my backup car, and I was good.

The moral? Replace your car batteries when they're weak :) Also, LiPo batteries are beasts.

garyfirestorm
0 replies
13h16m

In one of my previous roles, our lab used archaic Lotus Notes software which held information about which physical tests were being performed by whom. You needed to boot your laptop, get on the VPN, and open this clunky mess of a software program to check if your test was scheduled or not.

I figured out a way to have a desktop computer on the intranet, a Python script to scrape the Lotus Notes database, and some way to push all this data to a Teams list every 5-10 mins. Eventually the Teams API team shut off my access, but FWIW I managed to keep the list syncing with a Teams channel for a few months straight. Hacky, stupid and dumb, but that's how everyone could check the status without needing to open an app.

forinti
0 replies
5h58m

About the year 2000, I wrote a web app in Perl that used a file-based hash as a database.

This came about because there wasn't any database on the server and SQLite wasn't around yet.

This solution worked quite well for more than 20 years. The file grew to host hundreds of thousands of orders.

My only regret is that I should have charged more. The ROI is unbelievable, the amount I charged is a rounding error. The thing got replaced when the original owner passed away.

flybrand
0 replies
15h42m

Running an industrial machine installation and my Eastern European colleagues looped a 200 meter tape measure around the line 4 times to get a more accurate measure.

fallinghawks
0 replies
1d5h

My mom's place (about 100 miles from me) has a water heater that's of an age where it could fail, so I put together a Pico W and a water sensor. I had it notify me daily just to make sure it was still working. And for reasons unknown, every 8 days it would stop notifying. A reboot would resolve it. We tried logging errors and having it report upon reboot but I wasn't versed enough with Pi to figure out anything more than it being an HTTP POST error. So I changed the code so when it got to that error instead of logging it would just reboot itself, and all has been smooth since.

essayist
0 replies
12h16m

It's 1996 or so. The web is new. The Bureau of Transportation Statistics (BTS) at the US Department of Transportation collects ontime arrival data for all the major airlines, by flight. It publishes some summary reports, but what people really want to see is how all the flights from, say, JFK to LAX performed in a given month.

The monthly database textfile is not that large, but it is unwieldy.

I'm a web consultant, but database backends are not yet a thing, at least not for us. Static webpages, all the way down.

So I use a script to parse the database into a series of text files and directories. E.g. JFK/index.html is a list of all the airports receiving flights from JFK, e.g. LAX, SFO, etc. And JFK/LAX.html is that month's results for all JFK to LAX flights. Etc.

As I recall, once I'd worked it out, it took 15 minutes to generate all those files on my Mac laptop, and then a little ftp action got the job done. Worked great, but someone did complain that we were polluting search results with so many pages for LAX, SFO, etc. etc. (SEO, sadly, was not really on our radar.)
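
The generation step is about as simple as static sites get. A hedged TypeScript sketch of the same shape (field names invented; the original was a mid-90s script run over the BTS text file):

```typescript
import { mkdirSync, writeFileSync } from 'node:fs';

interface Flight { origin: string; dest: string; carrier: string; pctOnTime: number }

// Bucket the month's records by origin and destination, then emit one page per
// route plus an index page per origin airport.
function publish(flights: Flight[], outDir: string): void {
  const byOrigin = new Map<string, Map<string, Flight[]>>();
  for (const f of flights) {
    const dests = byOrigin.get(f.origin) ?? new Map<string, Flight[]>();
    byOrigin.set(f.origin, dests);
    dests.set(f.dest, [...(dests.get(f.dest) ?? []), f]);
  }

  for (const [origin, dests] of byOrigin) {
    mkdirSync(`${outDir}/${origin}`, { recursive: true });
    const links = [...dests.keys()].map(d => `<li><a href="${d}.html">${d}</a></li>`).join('');
    writeFileSync(`${outDir}/${origin}/index.html`, `<ul>${links}</ul>`);
    for (const [dest, legs] of dests) {
      const rows = legs.map(f => `<tr><td>${f.carrier}</td><td>${f.pctOnTime}%</td></tr>`).join('');
      writeFileSync(`${outDir}/${origin}/${dest}.html`, `<table>${rows}</table>`);
    }
  }
}
```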

That was replaced within a year by a php setup with a proper Oracle backend, and I had to explain to a DB admin what a weighted average was, but that's another story.

ericHosick
0 replies
10h20m

A long time ago I was building a data entry system in Visual Basic for forms that, once entered, needed to both store the input in a database and print out the form. There were hundreds and hundreds of different forms.

So, instead of making an interface for data entry and then a system to print the forms, the data entry UI for each form looked exactly the same as the forms themselves. Scrolling was needed because at the time there were only low resolution CRT screens.

However, for printing, I would draw the filled out form at a very high resolution in video memory "off screen" and print that.

So, the work to create one form resulted in supporting both data entry and printing.

It turned out that since the people doing the data entry also knew the forms really well, they were able to enter the data 2.5 times faster than initial estimates.

ebcode
0 replies
11h55m

In 2012, Intel had a site called esaa-members.com. It was supposed to be some type of authorized hardware catalog, with recipes for building computers out of known-to-be compatible parts. Anyways, there was a daily import of the hardware data, which was kicked off by a small Ruby script that downloaded some CSV file over SFTP. It was tiny, and all it did was put the CSV in a folder, to be processed by another script down the line. Occasionally though, there would be a network error, and it wouldn't write the file. When this happened, the next script that looked for the file wouldn't find it, and would send out an email alert.

That's when I discovered that you could write a GOTO statement in Ruby! I made a very minor addition to the beginning and end of the script, a label at the top, and a goto at the bottom for if the CSV file didn't exist. I had added my email to the list of alerts, and after that GOTO was added, I never saw another alert.

davidbiehl
0 replies
14h23m

I used to work for a small medical device distributor. We used an e-commerce platform that, at the time, didn’t have an API. We wanted to synchronize our products with their platform as products were added, discontinued, copy/image changes, etc.

I ended up using capybara (a Ruby gem for writing browser automation instructions for system tests in web apps) to automate all of the “clicks” to make the updates in the system.

It actually worked pretty well. It wasn’t fast, but it was better than having a human keep the data in sync.

After a few years, the platform released a REST API, and we transitioned to that. But browser automations worked great in the meantime!

edit: spelling

cracrecry
0 replies
8h17m

I bought a lot of DRM'd books that I could only read on restricted platforms in Windows. I wanted to use them on any of my devices without limitations. So I created a VM image of Windows inside VMware on a Linux computer and basically automated the scanning of the pages in ultra high resolution, just driving it with keystrokes from a Lisp domain-specific language.

The first iteration of the hack was done in minutes. Over time it could OCR almost anything and convert it into a readable ebook.

cr3ative
0 replies
1d6h

I was part of a team which had to make web interactives for an old desktop-only Java-based CMS which published out to HTML. This was back before cross-publishing to formats like Apple News was important; we only had to worry about things working on the browser version.

The CMS didn't support any kind of HTML/JS embed, and had quite a short character limit per "Text" block. But luckily, that block didn't filter out _all_ inline HTML - only some characters.

So, a bootstrap "Loading" element was inserted, along with a script tag which would bring in the rest of the resources and insert those in to the page where the bootstrap was placed. This quickly became a versatile, re-usable loader, and allowed us to launch the features. But all this grew from a very inelegant hack which just happened to work.

cdbattags
0 replies
9m

I worked for an education technology company that made curriculum for K-8. There are long sales cycles in this space and different departments of ed have different rules. Think "vote every 4 years because our books are out of date or just old". The technology wave came fast and most of this curriculum from incumbent providers was formatted to fit in a book with maybe some of the most cutting edge people having a large InDesign file as the output.

The edtech company I worked for was "web first" meaning students consumed the content from a laptop or tablet instead of reading a book. It made sense because the science curriculum for example came with 40+ various simulations that helped explain the material. A large metropolitan city was voting on new curriculum and we were in the running for being selected but their one gripe was that they needed N many books in a classroom. Say for a class of 30 they wanted to have 5 books on backup just in case and for the teachers that always like a hardcopy and don't want to read from a device.

The application was all Angular 1.x based that read content from a CMS and we could update it in realtime whenever edits needed to be made. So we set off to find a solution to make some books. The design team started from scratch going page by page seeing how long it would take to make a whole book in InDesign but the concept of multiple editing doesn't really exist well in this software. Meanwhile, my team was brainstorming a code pipeline solution to auto-generate the book directly from the code that was already written for the web app.

We made a route in the Angular app for the whole entire "book" that was a stupid simple for loop to fetch each chapter and each lesson in that chapter, all rendered out on a stupidly long page. That part was more or less straightforward, but then came the hard part of trying to style that content for print. We came across Prince XML which, fun fact, was created by one of the inventors of CSS. We snagged a license and added some print-target custom CSS that did things like "add blank page for padding because we want new chapter to start on the left side of the open book".

We needed a headless browser to render out all of this and then we needed the source with all the images, etc to be downloaded into a folder and then passed to Prince XML for rendering. Luckily we had an ECS pipeline so I tried to get it working in a container. I came up with a hack to wait for the end of the rendering for loop for the chapters/lessons to print something to console and then that was the "hook" for saving the page content to the folder. But then came the mother of all "scratching my head" moments when Chromedriver started randomly failing for no reason. It worked when we did a lesson. It worked when we did a chapter. But it started throwing up a nondescript error when I did the whole book. Selenium uses Chromedriver and Chromedriver comes straight from Google and the Chromium repo. This meant diving into that C++ code in order to trace it down, when I finally found the stack trace. Well yeehaw, I found an overflow error in the transport protocol that happens from Chrome devtools as it talks to the "tab/window" it's reading from. I didn't have the time to get to the bottom of the true bug so I just cranked the buffer up to like 2 GB and recompiled Chromium with the help of my favorite coworker and BOOM it worked.

But scaling this thing up was now a nightmare because we had a Java Dropwizard application reading a SQS queue that then kicked off the Selenium headless browser (with the patched Chromedriver code) which downloaded the page but now the server needed a whopping 2 GB per book which made the Dropwizard application a nightmare to memory manage and I had to do some suuuuper basic multiplication for the memory so that I could parallelize the pipeline.

I was the sole engineer for this entire rendering application and the rest of the team assisted on the CSS and styling and content edits for each and every "book". At the end of the day, I calculated that I saved roughly 82,000 hours of work because that was the current pace of how fast they could make a single chapter in a book multiplied by all the chapters and lessons for all the different states because Florida is fucked and didn't want to include certain lines about evolution, etc and so a single book for a single grade but for N many states that all have different "editions".

82,000 hours of work is 3,416.6667 days of monotonous, grueling, manual, repetitive design labor. Shit was nasty but it was so fucking awesome.

Shoutout to John Chen <zhanliang@google.com> for upstreaming the proper fix.

calvinmorrison
0 replies
1d5h

Getting a DDoS attack and just running a few iptables rules by hand mostly fixed it until upstream blocked it for us.

calamari4065
0 replies
14h3m

I was working on a video game, and we had a few hundred player hosted servers. The default way that steam handles server listings is not super great, and there was quite a lot of metadata for each server I wanted to filter by.

I didn't know how to set up a centralized service to handle server discovery ourselves, so I had each server serialize, compress, and base64 its metadata and store it in some "rules" field in the Steam API. Problem was that the rules field was a list of indeterminate length of strings of indeterminate length. Absolutely no documentation, so I had to brute force it to find the limits. It was just barely enough.

So the client would fetch the full list of servers, filter on the few parameters steam natively supported, then they'd fetch the metadata for every remaining server.
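
The encode/decode halves of that are small, even if the idea is gross. A hedged TypeScript sketch (the chunk size and key names are placeholders; the real limits had to be brute-forced as described):

```typescript
import { deflateSync, inflateSync } from 'node:zlib';

const MAX_RULE_LEN = 120; // placeholder -- the real limit had to be found by brute force

// Server side: JSON -> deflate -> base64 -> slices small enough for "rules" values.
function encodeMetadata(meta: object): Record<string, string> {
  const packed = deflateSync(JSON.stringify(meta)).toString('base64');
  const rules: Record<string, string> = {};
  for (let i = 0; i * MAX_RULE_LEN < packed.length; i++) {
    rules[`meta_${i}`] = packed.slice(i * MAX_RULE_LEN, (i + 1) * MAX_RULE_LEN);
  }
  return rules;
}

// Client side: reassemble the chunks in order and reverse the encoding.
function decodeMetadata(rules: Record<string, string>): unknown {
  const packed = Object.keys(rules)
    .filter(key => key.startsWith('meta_'))
    .sort((a, b) => Number(a.slice(5)) - Number(b.slice(5)))
    .map(key => rules[key])
    .join('');
  return JSON.parse(inflateSync(Buffer.from(packed, 'base64')).toString('utf8'));
}
```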

Honestly I feel really bad about this one. It was a bad solution but it worked for years.

bradhe
0 replies
7h27m

Once worked IT at a small company in college. Needed to run CAT5 to a new building we acquired. There was no conduit, but there was twisted pair for telephone running between the buildings. Occupying 3 lanes on each side gave us Ethernet on the new building :)

bonton89
0 replies
1d3h

I used to be really into modding the game Jedi Knight: Dark Forces II. The quirky engine has all sorts of weird bugs and limitations.

I created a flare gun weapon (similar to the sticky rail gun missiles, so nothing too crazy here) but found that if a player died with flares stuck to them, the flares were still attached and damaging them when they respawned, even though their whole location had changed. This bug would exist with rail gun missiles as well, but since the death animation was long and the fuse so short, it would never present in the base game.

I experimented with using detach commands that ran on player death but they'd just instantly reattach to the player model because of their proximity. I ended up creating an invisible explosive entity that fired on player death from the center of the player which did a damage flag ignored by players but which destroyed the flares.

bongodongobob
0 replies
10h11m

I was a remote sysadmin for a metal fabrication shop that backed up their CAD/machine files to a NAS daily. They had a recurring issue with their backups randomly failing maybe half a dozen times a month. I was told it was nothing to worry about, we've looked into it, tried everything, etc. Client says it's not a big deal. Until it was.

They had lost some file, of course, and needed to pull it from yesterday's backup. "Ok, sure I'll just restore it for you." Nope, backups failed the previous night. I don't know exactly what it was, but it was a huge deal and I was tasked with figuring it out, that was now my job.

The first thing I did was look at the log history. The only pattern I saw was that backups never failed on Monday nights. Huh.

I have no idea what that could mean, so I move on and write a script to ping the NAS from their server using task scheduler every 5 minutes and write a failure to a log. Maybe it's just offline, I have no idea what the cause of the failure is at this point.
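
That kind of scheduled ping-and-log check is roughly this (the original was a Task Scheduler script on Windows; the NAS address and log path here are made up):

    import datetime, subprocess

    NAS_HOST = "192.168.1.50"           # hypothetical NAS address
    LOG_FILE = r"C:\logs\nas_ping.log"  # hypothetical log location

    # One echo request with a 2-second timeout (Windows ping flags).
    result = subprocess.run(["ping", "-n", "1", "-w", "2000", NAS_HOST],
                            capture_output=True)
    if result.returncode != 0:
        with open(LOG_FILE, "a") as log:
            log.write(f"{datetime.datetime.now().isoformat()} FAILED to reach {NAS_HOST}\n")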

A couple weeks later, the backup fails and I check the log. Sure enough, the NAS dropped off the network overnight, and came back online in the morning. So I call my contact (he was their CAD guy, technical enough to be able to help me check things out) and ask if anything happened overnight, power outage, anything. He isn't aware of anything. The NAS is online, uptime is good, hasn't been power cycled in months.

I have him look at it and there's a MAC address sticker on it so I'm able to trace it back to the switch. Check the switch, sure enough, disconnected during the time shown by my ping log. I have him plug it into a different port, replace both the patch cable and cable connected to the NAS, and disable the previous port. And wait.

The next time it happens, I was able to talk them into buying a new NAS thinking it has to be the NIC on it. It's about 3 years old so it's an easy sell, should probably replace it anyway if it's that important. We ship it out, they pop it in, we transfer the data, and we wait.

Happens again.

So now at this point we are talking about replacing switches, routers, and firewalls. I get 3 different vendors involved. No one is seeing anything out of order, and all their hardware is out of warranty.

At this point, the network has been wiresharked to death and everything looks great, absolute dead end. The customer is not happy about having to potentially spend $10k on network gear we can't even prove is bad, so the on-call for this backup failure gets routed to me.

It happens and I drive over there at 3AM.

I arrive, find who is in charge, and they direct me to the network closet. I find out that the NAS is not here. I ask about the NAS. He says oh yeah, that's in a different room.

He brings me there and it's in a closet down a flight of stairs from the owner's office, with the network cable running under the door. The cable is lying on the floor in the office, end completely stripped.

Turns out, that was on the route to some supply closet that only third shift used. Third shift was 4 10's, Tues-Sat. They were tripping over the cable and the owner was plugging it in when he arrived in the morning. He was out of the loop for the whole thing so had no idea what was going on. He said it didn't seem to affect his "internet" so he never mentioned it.

So what was the hack? I threw a rug on the cable and drove home.

(Yes we did move it later, but there were some network changes out of my control that needed to happen first.)

bnny
0 replies
4h15m

I used to work as a Network Administrator, the team I was on managed something like 200-300 L2/L3 switches and half a dozen core routers.

Whenever a new device was connected, the people who ran the ethernet for us were nice enough to connect patch cables to the building switches. The on-site techs would go set up whatever was connecting and we'd go hunting through disabled ports for one that came up with the matching MAC. This could take up to 30 minutes depending on the size of the switch.

One day I had enough time to scrape together some VBScript in an Excel document we used as our day-to-day documentation of our management IPs. It would snag the list of disabled interfaces from your clipboard, run a simple regex, generate a command to select all the interfaces, and shove it back into your clipboard.
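
A rough Python equivalent of that transformation (the original was VBScript working off the clipboard). The listing format, regex, and Cisco-style command are assumptions about what their switch output looked like:

    import re

    # Example paste from a "show interfaces status"-style listing (format assumed).
    pasted = """\
    Gi1/0/12  disabled
    Gi1/0/15  disabled
    Gi1/0/23  disabled
    """

    # Pull out the names of the disabled ports.
    ports = re.findall(r"^\s*(\S+)\s+disabled", pasted, flags=re.MULTILINE)

    # Build one command that selects all of them at once.
    command = "interface range " + ", ".join(ports)
    print(command)  # interface range Gi1/0/12, Gi1/0/15, Gi1/0/23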

It was disgusting, but it also changed 30 minutes of mind-numbing work with the on-site techs sitting on their hands into around 5. It stuck around for about 3 years.

bmitc
0 replies
9h53m

There was once an API for an instrument, I believe using COM, that was primarily designed for building an interface with. It had very limited actual functionality for getting data out of the API; most data was simply meant to be displayed on user interface elements. We needed a piece of data programmatically, to do something with it, but it was only available via a user interface element. So I wrote and trained a vision application that would OCR the numeric indicator by taking a programmatic screenshot of the user interface element and then return the value. So at the top level, one simply made a function call that returned a number, but below, it was doing the UI display (hidden), screenshotting a region of the hidden window, OCRing it, and then returning the number.
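
A generic analogue of that screenshot-then-OCR wrapping, in Python with Pillow and pytesseract (the real thing was a custom-trained vision application; the screen coordinates and libraries here are purely illustrative):

    # Generic analogue only; requires Pillow, pytesseract, and a Tesseract install.
    from PIL import ImageGrab
    import pytesseract

    def read_indicator_value() -> float:
        # Grab just the region of the (hidden) window holding the numeric indicator.
        region = ImageGrab.grab(bbox=(100, 200, 260, 230))  # left, top, right, bottom
        text = pytesseract.image_to_string(
            region, config="--psm 7 -c tessedit_char_whitelist=0123456789.-")
        return float(text.strip())

    print(read_indicator_value())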

You'd be surprised at the hacks required when interacting with scientific instrumentation. I am not a hacker at heart, but I do take pride when I'm able to wrap a hack such that you'd never know what it was doing underneath. Leaky hacks are no fun.

blowski
0 replies
1d6h

Generated HTML email newsletters from Excel (in 2004).

It was a big old-fashioned bookseller trying to compete with Amazon. Software and the web was locked down tight, but they opened a daily report in Excel, and I built a VBA macro that generated the necessary HTML and published the images to an FTP server. Turned a 2 day job into a 10 minute one.
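
A rough modern Python analogue of that report-to-newsletter flow (the original was a VBA macro); the column layout, file names, and FTP details are all invented:

    from ftplib import FTP
    from openpyxl import load_workbook

    wb = load_workbook("daily_report.xlsx")
    rows = wb.active.iter_rows(min_row=2, values_only=True)  # skip the header row

    # Turn each report row into a table row of the newsletter.
    items = "".join(
        f"<tr><td>{title}</td><td>{author}</td><td>{price}</td></tr>"
        for title, author, price in rows
    )
    html = f"<html><body><table>{items}</table></body></html>"

    with open("newsletter.html", "w", encoding="utf-8") as f:
        f.write(html)

    # Publish the generated file to the FTP server the assets lived on.
    with FTP("ftp.example.com") as ftp:
        ftp.login("user", "password")
        with open("newsletter.html", "rb") as f:
            ftp.storbinary("STOR newsletter.html", f)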

bitwize
0 replies
13h9m

I once worked on a project for one of the largest vendors of chemical lab equipment. The application was in... frickin' Microsoft Access, not my choice. This would've been the 90s. The idea is that the sales person would enter in what the lab needed to do and the database would show them the equipment they needed.

Anyway, we eventually developed a feature that allowed the application to copy out the lab process and equipment list to a separate database, zip it up, and FTP it to a server. It would also export a CSV file with the name of the lab, salesperson, date of sale, and other searchable information and FTP that up too. I wrote a "web service" (this was late 90s, before that term was cool) that would collect up these CSV files, aggregate them in one big CSV file, and then from within that application that CSV file could be searched and the appropriate ZIP file downloaded and merged with the local database. It was written in Perl and ran as a CGI script on some internal Windows NT machine's IIS instance.
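
The aggregation half boils down to something like this, shown here as a Python sketch (the original was a Perl CGI script on IIS; the paths and column names are invented):

    import csv, glob

    # Roll every uploaded per-lab CSV into one searchable index file.
    with open("index/all_labs.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["lab", "salesperson", "date", "zip_file"])
        for path in glob.glob("incoming/*.csv"):
            with open(path, newline="") as f:
                for row in csv.reader(f):
                    writer.writerow(row)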

It was janky as all get-out, but it worked and we did it a couple of years before the big web services mania hit.

ben0x539
0 replies
1d5h

I wrote it up in a bit more detail[1], so I'm giving away the punch line here, but I used to use some cursed bash wrappers to smuggle my bashrc and vimrc along on ssh sessions to mostly-ephemeral hosts by stashing them in environment variables matching the LC_* pattern allowed by default for pass-through in debian-ish sshd configs.

[1]: https://gitlab.com/-/snippets/2149340

beders
0 replies
13h0m

Too late for the bonanza here, but back in the day I owned an 8-bit Amstrad CPC 464 machine which came with a built in cassette deck.

Eventually a 3"(!) disk drive was launched which needed some extra space in the upper memory banks to host its driver.

This made it almost impossible to copy certain games (ok, it was Spindizzy) from tape to disk, since there was no longer enough memory to load the game without overwriting the disk driver.

Almost impossible, until I split up the loading process and used the only other remaining RAM possible: the graphics buffer. So while loading the game your whole screen got distorted pretty bad, but it worked: I copied the graphics buffer right over the precious disk driver and the game worked just fine.

andylynch
0 replies
8h17m

Needed a hard drop copy of data from an industrial metering device for auditing and backup.

It was the handover point for the delivery of natural gas for a large field and a quarter of the country’s energy supply.

The engineers wired an extra serial port onto the lead and plugged it into a printer we had going spare.

alex3305
0 replies
8h29m

At university we were tasked with making an Android application for a real client. This was part of our course and was really fun to do. Especially because we did this in two groups competing with each other. The app was some sort of geocaching mixed with a quiz to discover a city in groups. We also built a server side app that accompanied the clients. This app was targeting Android 2.3 and ran on some sort of old Xperia device.

Building the app was a lot of fun and it worked pretty well most of the time. During beta testing however, we were given all the resources that were created by a third party. This mostly included UI elements and other images that made up the UI. Testing it out, again, it worked pretty well. Until one time it didn't...

After about an hour or so of playing, the app would consistently crash. After some OS troubleshooting, we came to the conclusion that apparently Android (at the time) had the habit of not putting images in managed memory, but separately. And whenever this space overflowed, an app would simply crash. To resolve this you would need to manage this space yourself and clear out memory.

However, we only discovered this a week or so before the deadline, and implementing memory management would be nigh impossible in that time. So I came up with the hackiest solution that I ever built. I added a crash handler to the app which would start another instance. I also added a serializer/deserializer to the app, and whenever you reached the main menu all play progress was serialized to storage. Whenever the app crashed and restarted, this was read again, letting the users resume play. The only side effect was some weird app flickering because of the crash and restart.

A week later when we delivered the app to our clients, they wanted to try it out and play test it. So we did, along with the other group. And lo and behold, after an hour or so the app crashed. And restarted. Unlike the other group's, where the app crashed and had to be restarted manually.

In the end the client was really happy with the result. Because it just worked. AFAIK the app is still in production the same way it was about 10 years ago.

al_borland
0 replies
11h2m

I was given a spreadsheet with a bunch of data and told I needed to use that data to make a couple thousand Outlook email templates.

We started out having interns do it, but it was taking too long and they were making a lot of mistakes. I ended up writing an AutoHotKey script to copy stuff out of Excel, switch to Outlook, then build the template and save it in the specified format. It required finding a lot of obscure keyboard shortcuts to make it all work, but it got the job done. It was still a bit manual to run, as it was too fragile to let it all go in one go, and I had to watch it. But it turned days or weeks of tedious work into something that only took a few hours once the script was done.

aisofteng
0 replies
7h20m

Years ago my team was tasked with greenfield dev of a cloud native app while the platform/infrastructure was also evolving. We worked nights and weekends to get it done on time only to find out at the last second that the platform team had enforced controls on internal services being able to access the internet, requiring authentication to do so. This was news to us.

We were behind schedule and had, I think, three separately implemented/maintained/deployed services that needed to be able to access the internet to do their work. Rather than implementing the intended auth mechanism in each service, writing tests for it, going through code review, and redeploying, I instead added nginx to the base Docker image they all used, configured them to send requests to that nginx instead of as normal, and made that nginx instance man-in-the-middle our own services to attach a hardcoded HTTP header with the right creds.

I man-in-the-middled my own services as a hack - dumb but it worked. It was meant as a quick fix but stayed for, I think, a couple of years. It did eventually end up being the source of an outage that took a week to diagnose, but that's a different story.

aisofteng
0 replies
7h30m

Not quite what was asked but a few of the stories here reminded me of this.

Years ago I was working on developing a new cloud native service. The particular microservice I was working on had to call out to multiple other services, depending on the user parameters. Java 8 had just come out and I implemented what I thought was an elegant way to spin up threads to make those downstream requests and then combine the results using these fancy new Java 8 stream APIs.
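
A rough Python analogue of that fan-out-and-combine shape (the original was Java 8 threads plus the new stream APIs; the feature names and fetchers here are invented):

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical downstream calls, one per optional feature.
    FETCHERS = {
        "recommendations": lambda user: {"recommendations": ["a", "b"]},
        "history":         lambda user: {"history": ["x"]},
    }

    def fetch_all(user: str, requested: list[str]) -> dict:
        combined: dict = {}
        with ThreadPoolExecutor() as pool:
            # One downstream request per requested feature, run in parallel,
            # then merged into a single response.
            futures = [pool.submit(FETCHERS[name], user) for name in requested]
            for future in futures:
                combined.update(future.result())
        return combined

    print(fetch_all("user-1", ["recommendations", "history"]))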

I realized at some point that there was a case where the user would want none of those downstream features, in which case my implementation would spin up a thread that would immediately exit because there was nothing to do. I spent a couple days trying to maintain (what I saw as) the elegance of the implementation while also trying to optimize this case to make it not create threads for no reason.

After a couple days I realized that I was spending my time to try to make the system sometimes do nothing. When I phrased it that way to myself, I had no problem moving on to more pressing issues - the implementation stayed as is because it worked and was easy to read/understand/maintain.

To this day, I avoid the trap of "sometimes make the system do nothing". One day, that performance optimization will be necessary, but that day has not yet arrived in the ~7 years since then.

WalterBright
0 replies
9h28m

Instead of translating C header files to D so D could use them, build an actual C compiler into the D compiler and then D can import .c files directly.

SkyPuncher
0 replies
1d3h

Adding a Content Security Policy of “upgrade-insecure-requests”. It does nothing meaningful for your security, but it’s enough to satisfy a bunch of these scanning tools that give you a letter grade.

Yes, we want to add a robust CSP, but we currently have some limitations/requirements that make implementation more challenging.
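
For reference, the whole hack is a single response header. A minimal sketch of attaching it, assuming a Flask app (the framework is purely illustrative, not from the comment above):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_csp(response):
        # Sends: Content-Security-Policy: upgrade-insecure-requests
        response.headers["Content-Security-Policy"] = "upgrade-insecure-requests"
        return response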

Seb-C
0 replies
10h41m

About 10 years ago I was working for a French bank, for a small regional team to develop internal tools.

The only way we could access the core database was through a mainframe terminal emulator only available on client PCs across the internal network. It was basically an old school terminal based UI where you had to enter specific codes and sequences of keys to navigate.

It was not supposed to be automatable, and we did not have permission to deploy any executable to computers on the network.

However, we found a way to plug into it by calling an obscure DLL via Internet Explorer's ActiveX. From there, we had access to only two functions: one to send a sequence of key strokes to the emulator, and another which was basically getString(x, y, length).

We built whole applications using this database, only via those two functions, which led to giant procedures for each query we had to do. It was terrible, unstable and slow, and broke at every update, but it did the work.
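
A sketch of what those giant procedures boiled down to, written here with pywin32 COM for brevity (the original went through ActiveX in Internet Explorer). The ProgID, key sequence, and screen coordinates are invented; sendKeys and getString are the two functions described above:

    import win32com.client

    emulator = win32com.client.Dispatch("Bank.TerminalEmulator")  # hypothetical ProgID

    def get_account_balance(account_id: str) -> str:
        # Navigate the green-screen menus with a canned key sequence...
        emulator.sendKeys(f"CODE42{{ENTER}}{account_id}{{ENTER}}")
        # ...then read the result back by screen position: getString(x, y, length).
        return emulator.getString(10, 5, 12).strip()

    print(get_account_balance("00123456"))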

Sai_
0 replies
12h2m

I have a website, blog, a cdn, and a domain level email inbox running off AWS SES.

You email specific email addresses which get processed as web pages, blog posts, or attachments extracted to the cdn. All other emails sent to your domain sit inside the inbox. You can also send out emails from any address.

The blog and webpages are all SEO optimised so you can share the link on say Twitter and it will unfurl the link and read the meta tags.

You can also forward specific emails to a special address to be shared or bookmarked in your browser.

The entire thing runs off Lambdas, S3, Cognito, and AWS SES, nary a database. I use pug template files to format content extracted from emails.
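
A stripped-down Python sketch of the publish step, assuming SES is configured to drop the raw message into an S3 bucket (the bucket names, key scheme, and HTML template are invented, and the wiring to the actual Lambda/SES event is omitted):

    import email
    from email import policy
    import boto3

    s3 = boto3.client("s3")

    def publish_email_as_page(inbox_bucket: str, message_key: str) -> None:
        # Read the raw message SES stored, pull out subject and body...
        raw = s3.get_object(Bucket=inbox_bucket, Key=message_key)["Body"].read()
        msg = email.message_from_bytes(raw, policy=policy.default)
        subject = msg["Subject"] or "untitled"
        body = msg.get_body(preferencelist=("html", "plain")).get_content()

        # ...and write a public page straight back to S3, no database involved.
        page = f"<html><head><title>{subject}</title></head><body>{body}</body></html>"
        s3.put_object(
            Bucket="my-site-bucket",  # invented bucket name
            Key=f"posts/{subject.lower().replace(' ', '-')}.html",
            Body=page.encode("utf-8"),
            ContentType="text/html",
        )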

To make this work, I had to do a deep dive into how Gmail’s email composer translates actions into HTML tags, then align the templates to these behaviours.

For a while, I had a handful of paying customers which paid my AWS bills. Right now, I’m down to one customer and the rest of my uses are for personal projects.

I learnt a lot in the process - from templating to SEO to S3 usage to Lambdas and got a very usable domain level email inbox and blog out of it. The CDN and static pages are a little less useful but building them too was quite fun.

Btw, highly recommend nodemailer as a module for email parsing.

Mister_Snuggles
0 replies
13h39m

At my last job, I built a tool to produce standard documents from our green-screen system that held the client data.

The users of the green-screen system would tell it to print a document by picking which template to use from a menu. The system would generate an XML file in a directory that was shared out via Samba. A VB6 program watched that directory for these XML files to appear, when one appeared it would figure out what the relevant template was, use COM automation to tell MS Word to load the template, fill in the template fields, save it to that client's folder on the file server, then print (on the user's selected printer in the green-screen system) two copies (one for the paper file, one to mail) and an envelope.
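
A rough Python sketch of that watch-and-dispatch loop (the real thing was VB6 driving Word over COM); the share path, XML layout, and template mapping are invented, and the Word automation itself is left as a stub:

    import time
    import xml.etree.ElementTree as ET
    from pathlib import Path

    WATCH_DIR = Path(r"\\fileserver\print_requests")  # hypothetical Samba share
    TEMPLATES = {"LTR001": "request_letter.dot"}       # stand-in for the Access config

    def fill_and_print(template: str, fields: dict) -> None:
        # The VB6 program used COM automation here: open the Word template,
        # fill its fields, save to the client folder, print two copies + envelope.
        ...

    while True:
        for xml_file in WATCH_DIR.glob("*.xml"):
            doc = ET.parse(xml_file).getroot()
            code = doc.findtext("template_code")           # invented tag name
            fields = {child.tag: child.text for child in doc}
            fill_and_print(TEMPLATES[code], fields)
            xml_file.unlink()  # request handled
        time.sleep(5)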

There were a bunch of weird word processing practices that made it slightly worse than it already sounds. Each letter that was sent out was actually made of a letterhead (one for each location we operated) and a body with the standard text for various letters we sent. The body would sometimes contain links to other documents (e.g., forms we were requesting someone to fill out), the program would follow these links and print those documents too, but only one copy as we didn't need a bunch of blank forms on the paper file.

There was also an Access database used by this VB6 program to maintain various bits of configuration data - mappings of document codes to filenames, mappings of green-screen printer names to Windows printer names, etc. Access gets a bad rap, but it made maintaining that configuration data a breeze.

It was horrific, but it saved everyone an incredible amount of time.

LorenDB
0 replies
1d5h

I've been using sshuttle to create a VPN to my server. It's a wonderful abuse of ssh.

JohnBooty
0 replies
1d2h

TL;DR: Had no database so I made a PHP page with a hardcoded array of 100,000 coupon codes.

---

Made a PHP landing page for a customer where they could redeem a coupon code they were sent via snail mail. About 100,000 codes were sent out via USPS.

Threw together the basic code you might expect, simple PHP page + MySQL database. Worked locally because customer was dragging their feet with getting me login creds to their webhost.

Finally, with the cards in the mail, they get me the login creds at 5PMish. I login and there's no database. Cards are going to be arriving in homes as early as 8AM the next day. How TF am I going to make this work... without a database?

Solution... I just hardcoded all 100,000 codes into a giant PHP array. Or maybe it was a hash/dict or something. I forget.

Anyway, it performed FINE. The first time you used the page it took about 30 seconds to load. But after that, I guess `mod_php` cached it or something, and it was fine. Lookups returned in 100ms or so. Not spectacular but more than performant enough for what we needed.

Got paid. Or, well, my employer did.

Groxx
0 replies
10h58m

Not particularly cursed, but my single-file Ruby-script APNS-notification-pusher with Redis ran for a couple years on the first try, and handled around ten million chat-app users on a single core with no trouble. I profiled to make sure it wasn't leaking when I wrote it and... yep, that's all it really takes. Bare-bones code is pretty effective at reminding me that computers are actually pretty fast nowadays.

GlenTheMachine
0 replies
15h17m

TL/DR: I rearranged the address lines on an embedded controller with a razor knife to "fix" a bug on a bus translation chip.

I was writing the motor controller code for a new submersible robot my PhD lab was building. We had bought one of the very first compact PCI boards on the market, and it was so new we couldn't find any cPCI motor controller cards, so we bought a different format card and a motherboard that converted between compact PCI bus signals and the signals on the controller boards. The controller boards themselves were based around the LM629, an old but widely used motor controller chip.

To interface with the LM629 you have to write to 8-bit registers that are mapped to memory addresses and then read back the result. The 8-bit part is important, because some of the registers are read or write only, and reading or writing to a register that cannot be read from or written to throws the chip into an error state.

LM629s are dead simple, but my code didn't work. It. Did. Not. Work. The chip kept erroring out. I had no idea why. It's almost trivially easy to issue 8-bit reads and writes to specific memory addresses in C. I had been coding in C since I was fifteen years old. I banged my head against it for two weeks.

Eventually we packed up the entire thing in a shipping crate and flew to Minneapolis, the site of the company that made the cards. They looked at my code. They thought it was fine.

After three days the CEO had pity on us poor grad students and detailed his highly paid digital logic analyst to us for an hour. He carted in a crate of electronics that were probably worth about a million dollars. Hooked everything up. Ran my code.

"You're issuing a sixteen-bit read, which is reading both the correct read-only register and the next adjacent register, which is write-only", he said.

I showed him in my code where the read in question was very clearly a CHAR. 8 bits.

"I dunno," he said - "I can only say what the digital logic analyzer shows, which is that you're issuing a sixteen bit read."

Eventually, we found it. The Intel bridge chip that did the bus conversion had a known bug, which was clearly documented in an 8-point footnote on page 79 of the manual: 8-bit reads were translated to 16-bit reads on the cPCI bus, and then the 8 most significant bits were thrown away.

In other words, a hardware bug. One that would only manifest in these very specific circumstances. We fixed it by taking a razor knife to the bus address lines and shifting them to the right by one, and then taking the least significant line and mapping it all the way over to the left, so that even and odd addresses resolved to completely different memory banks. Thus, reads to odd addresses resolved to addresses way outside those the chip was mapped to, and it never saw them. Adjusted the code to the (new) correct address range. Worked like a charm.

But I feel bad for the next grad student who had to work on that robot. "You are not expected to understand this."

EvanAnderson
0 replies
12h46m

Scraping output meant for a printer from old software/systems for "ETL" purposes has been very useful. A few of the uses I can think of during my career:

- Apple II-based database used by a choir teacher to track the school music library

- MS-DOS accounting software payroll reports to generate W-2 forms

- Patient records from a pediatric office

- Receipt printer on a gas station pump controller

- Customer transactions and balances from a home heating propane supplier

- Customer transactions from unattended fueling site controllers

You might think this only applies to old software, but often "printing" to a Generic/text-only printer in Windows gives good results.
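
The pattern is usually just "print" the report to a text file, then slice the fixed-width columns back into structured rows. A minimal Python sketch, with made-up column positions and file names:

    import csv

    with open("payroll_report.txt", encoding="latin-1") as report, \
         open("payroll.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["employee", "gross", "withheld"])
        for line in report:
            if not line.strip() or line.startswith(("PAGE", "---")):
                continue  # skip page headers, rules, and blank lines
            # Slice the fixed-width columns (positions are invented).
            writer.writerow([line[0:30].strip(), line[30:42].strip(), line[42:54].strip()])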

BiteCode_dev
0 replies
8h46m

In 2010, one of my client's employees kept complaining that the report she downloaded from our intranet app couldn't be opened.

It was only her machine. It worked on all others.

I looked at the file, and it was a tad smaller than on the other machines.

I disabled her anti-virus, and it worked!

Realizing Kaspersky was taking a bite out of the report, I created a self-signed TLS certificate so the antivirus could read it.

Worked.

Agentlien
0 replies
8h30m

When I was a teenager my GPU fan broke, but I happened to have a spare CPU fan. I modified the connector so it would fit and just tied the fan upside down under the GPU with a string. It made an incredible amount of noise but worked very well.

AeroNotix
0 replies
6h5m

Worked for a large corporation that is now widely considered a very scummy printer manufacturer.

For whatever reason there was an issue no one could figure out: if two networked computers, each attached to a printer, couldn't communicate (e.g. ping each other), then printing from the local machine itself would stop working.

I wrote a small Erlang application which would monitor an Erlang pid on the other computer and restart the network interface in case it ever lost contact, which generally made things work again.

Obviously many ways to solve (such as figuring out why it behaved like that in the first place!!) but I was learning Erlang at the time and it seemed a neat way to do it.

9dev
0 replies
10h32m

I once built a backup script in Batch (Windows Shell scripting language). That was a while before Powershell got popular, and needed to work on everything from Windows 2000 to Windows 7 or server. Additionally, as I was working for a small (read: 5 guys) IT shop with small customers, it needed to be free, while still supporting lots of advanced functionality (automatic diff backups, restore operations, rotation, network drive mapping, configuration parsing, etc.) that ruled out most proper solutions. While it wrapped a low-level shadow snapshot tool, it still got a few hundred lines long, did crazy logic and output stuff, and even required me to create a unit testing framework for Batch files from scratch. If that all doesn’t sound really noteworthy, it’s because you don’t know that curse of a shell :)

0xDEADFED5
0 replies
14h30m

OpenResty http auth for a stats page is a hardcoded password check written in Lua in nginx.conf