Ask HN: I'm a software engineer going blind, how should I prepare?
3270 points by zachrip on April 19, 2020 | 473 comments
I'm a 24 y/o full stack engineer (I know some of you are rolling your eyes right now, just highlighting that I have experience on frontend apps as well as backend architecture). I've been working professionally for ~7 years building mostly javascript projects but also some PHP. Two years ago I was diagnosed with a condition called "Usher's Syndrome" - characterized by hearing loss, balance issues, and progressive vision loss.

I know there are blind software engineers out there. My main questions are:

- Are there blind frontend engineers?

- What kinds of software engineering lend themselves to someone with limited vision? Backend only?

- Besides a screen reader, what are some of the best tools for building software with limited vision?

- Does your company employ blind engineers? How well does it work? What kind of engineer are they?

I'm really trying to get ahead of this thing and prepare myself as my vision is degrading rather quickly. I'm not sure what I can do if I can't do SE as I don't have any formal education in anything. I've worked really hard to get to where I am and don't want it to go to waste.

Thank you for any input, and stay safe out there!

Edit:

Thank you all for your links, suggestions, and moral support, I really appreciate it. Since my diagnosis I've slowly developed a crippling anxiety centered around a feeling that I need to figure out the rest of my life before it's too late. I know I shouldn't think this way but it is hard not to. I'm very independent and I feel a pressure to "show up." I will look into these opportunities mentioned and try to get in touch with some more members of the blind engineering community.




You can definitely continue as a software engineer. I'm living proof. It won't be easy, especially at first. For a while it will feel like you're working twice as hard just to keep up with your sighted peers. But eventually, as you get better with your tools, you'll find you have some superpowers over your sighted peers. For example, as you get better with a screen reader, you'll be bumping the speech rate up to 1.75-2X normal speech. You'll be the only one who can understand your screen reader. You'll become the fastest and most proficient proofreader on your team. Typos will be easily spotted as they just won't "sound right". It will be like listening to a familiar song and then hitting an off note in the melody. And this includes code. Also, because code is no longer represented visually as blocks, you'll find you're building an increasingly detailed memory model of your code. Sighted people do this, too, but they tend to visualize it in their mind. When you abandon this two-dimensional representation, your non-visual mental map suffers no spatial limits. You'll be amazed how good your memory will get without the crutch of sight. Good luck. If you're a Mac user you can hit me up for tool recommendations. My email is my username at gmail dot com.


Your comment about proof reading and errors not sounding right reminds me of the "proper/best" way to learn Morse code for ham radio. Most everyone is familiar with the charts that show A=.- B=-... etc. Lots of beginners try to actually interpret the sounds in their ears and convert that to a letter and then convert a bunch of those to a word. Obviously, that is very processor intensive and has a high failure rate. The best way is to treat each audio pattern as a letter in a new language and skip the conversion process. Eventually, you recognize entire words just from the sounds.


A while back I worked for a team that had wired up Jenkins to a speaker in the office. Each type of event would trigger a different Zelda sound effect. Victory music for successful deployments, game over music for breaking the build, etc. Notably, dev server exceptions were connected to sword clashing sounds.

It led to debugging situations like this:

"Okay, so that's two clanks when we click on this button, but if we do it on this other page it's only one clank. Hm."

Turns out, there are some bugs that are easier to detect this way. Looking at timestamps doesn't really give a "sense" of whether two events are in a tight causal link. With sound, you can immediately "hear" that two adverse events are occurring at identical intervals every time. What's great is that the sound information doesn't take up any additional attention. It just fades into the background if you don't need it. When there is a pattern, it stands out.
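
If you want to wire up something similar, the plumbing is genuinely small. Here's a rough sketch in Python of the receiving end, assuming your CI can POST a JSON payload with an "event" field to a box attached to the office speaker (the event names, port, and sound files here are made up, and aplay is just one possible command-line player):

    # sound_board.py - play a sound per CI event (stdlib only)
    import json
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SOUNDS = {  # hypothetical event names -> sound files
        "deploy_success": "victory.wav",
        "build_broken": "game_over.wav",
        "dev_exception": "sword_clank.wav",
    }

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length)).get("event")
            if event in SOUNDS:
                # fire-and-forget so slow audio never blocks the webhook
                subprocess.Popen(["aplay", SOUNDS[event]])
            self.send_response(204)
            self.end_headers()

    HTTPServer(("", 8080), Handler).serve_forever()

On the Jenkins side, a notification plugin or a curl step at the end of the job is enough to hit it.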


At a previous gig, we had a lot of systems named after animals, and I'd joked about wiring our alerting system up to emit the animal sound for the system that was paging, but I think I'm actually going to do that at my next gig. Outside the land of software, engineers debug systems using sound all the time - I think we're underutilizing our senses and our pattern recognition.


I always wonder at the madness of elevator design. Most of them "ping" as they pass a floor for accessibility. But that means the blind rider must first know the floor he is on, then count the pings to the right floor.

It should simply announce each floor as it passes. I know from experience that you can make quite comprehensible voice sounds by connecting a 5V I/O pin to a speaker, so there is no excuse. (All you do is digitize the speech, then turn the pin on and off as the wave crosses 0.)
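
To make the zero-crossing trick concrete, here's a minimal sketch of the playback half, MicroPython-flavored (the pin number and the source of the digitized samples are placeholders; real firmware would pace the loop with a hardware timer instead of sleeping):

    # One-bit speech: drive the pin high whenever the signed sample is
    # above zero, low otherwise, at the original sample rate.
    import time
    from machine import Pin  # MicroPython

    SAMPLE_RATE = 8000                  # 8 kHz is plenty for speech
    PERIOD_US = 1000000 // SAMPLE_RATE

    speaker = Pin(2, Pin.OUT)           # placeholder pin

    def play(samples):
        # samples: iterable of signed ints from the digitized announcement
        for s in samples:
            speaker.value(1 if s >= 0 else 0)  # square up at zero crossings
            time.sleep_us(PERIOD_US)

The result is a harsh square wave, but speech stays surprisingly intelligible.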

Ditto for every device that comes with a chart saying which LED blink pattern means what status. Like the "fast blink" and "slow blink". The LED is blinking, is that "fast" or "slow"?


Any modestly "modern" elevator would have Braille indicators. And many also announce floors the same way public transit does.


I've never been in an elevator that announced the floors with anything better than a ping. This is despite elevators being controlled by single board computers since around 1980.

The buttons have had braille on them for a long time.


Interesting. OTOH, while old buildings here (electromechanical, no microcontrollers, <1990) lack even the basic ping, I haven't been in an elevator built in the new millennium which wasn't excessively chatty. (Central Europe mostly, but the same experience everywhere I went: "ground floor, main entrance, access to train station; mind the door; door is opening")


Interesting you say that: literally every lift in a public space around here that makes any sound at all announces each floor with proper spoken speech, not just a ping.


I wonder if it is due to some ancient regulation mandating rings/pings that nobody bothered to update for current tech. When I lived in Warsaw (Poland) for a while, every single elevator I came across - including those in very old buildings - had voice recordings announcing the current floor. I almost learned Polish numbers from them :-P


In a lot of modern buildings in Hong Kong, lifts announce floors in three languages: English, Cantonese, and Mandarin.

I'd imagine in Macau, they might use four (those three plus Portuguese); the public transit stop announcements do, and quite frankly I'm not sure how they guarantee that they can complete the announcements before the next stop.


At that point, just use Morse code. For small buildings, the cuckoo clock method works too.


The main reason elevators in Hong Kong don't is that most of them are high-speed high-rise elevators and the numbers just whip by. It would probably sound very frantic to use Morse if you went from ground to 50 in less than a minute.

Instead, they only announce the floors they stop on, so this works fine.

One benefit of using spoken vernaculars rather than Morse is that Morse has to be taught; Hong Kong historically draws large portions of its population from migrants originating from all over China, where until recently there was wide variance in schooling and literacy rates.


I know of at least a few in London office buildings that speak. Especially buildings with large banks of elevators, which tend to have "fancier" control systems that manage all of the elevators.


Canadian here - even my condo building elevator announces floor numbers with a (pretty well-digitized) voice. And believe me, my building is no marvel of modernity.



I haven't seen talking ones a ton, outside of hotels and fairly new conference centers and the like.


Might be an EU thing. Pretty sure there's a regulation that says new lifts need to announce floors; even cheap hoists that only go one floor do it.


A fun demonstration of getting comprehensible sound via one-bit sampling: on a C64 you can use a tight loop to read from any number of input pins (such as the tape input) to sample sound, and play it back either by writing to any number of outputs connected to a speaker, or indirectly by using the signal to toggle the sound chip volume up and down.

So not only can you get comprehensible voices from a 5V I/O pin to a speaker - pretty much any CPU newer than the mid-'70s is powerful enough to drive it.

So I agree - there's not really any excuse other than that people haven't thought about the UI.


My C64 is sitting right next to me, and I think I'll play with this after work. Thanks!


Careful not to burn anything out... though the C64 is pretty indestructible - I connected all kinds of things to its I/O lines back in the day, including a few inadvisable cases of soldering things straight to the user port pins (I'm amazed I never destroyed any machines... that way).

A tip is to turn off the screen during sampling and playback, as otherwise the video chip steals a lot of memory cycles.

You might then also be interested in this far more impressive playback with the C64:

https://brokenbytes.blogspot.com/2018/03/a-48khz-digital-mus...


The one edge case is that the bells work for all blind folks, not just those who understand the language spoken in the announcement. Of course, a blond person in that case could just count announcements …


Hey, let's leave hair color out of this.


Argh, wish I'd got your comment in time. I meant, of course, a blind person!


> Of course, a blond person in that case could just count announcements …

Exactly, he's not worse off.

Besides, I've traveled in foreign countries where I literally do not know a single word in that language. You start picking up words by association almost immediately.


The elevators in my building only make one announcement per stop.


The "ping for accessibility" is a holdover from ancient times (think 1910s), when there was an actual bell that was rung by the elevator's passage (one ding up, two dings down, IIRC). Current technology has (and uses) far better capabilities, either by (common) samples or by (uncommon) speech synthesis: "ding" "Floor 6." "Doors opening..."


I have noticed that some street walk/don't walk signs now have a voice rather than just a tone. It sure took a long time. Though the voice would be improved by saying "walk east" rather than just "walk".

But still, hardly any use a voice.


At a former employer, I had coded some embedded devices to spit out their bus address and current error code in Morse code using the buzzer in the keypad. Saved me lots of time during initial debugging of new installations; then I'd just disable the buzzer before heading home.

On at least one occasion I forgot: years later, a puzzled chief engineer called me and said that while they were discussing issues with the equipment we'd delivered, over the din of the buzzers, his first officer had told him that E209 had a pending IGBT failure. Would that hypothesis have any possible merit?
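
For anyone who wants to replicate the trick, the encoder is tiny. A rough sketch of the idea in Python (the real thing was firmware, of course; buzzer_on/buzzer_off stand in for whatever drives your keypad buzzer):

    import time

    MORSE = {
        "0": "-----", "1": ".----", "2": "..---", "3": "...--", "4": "....-",
        "5": ".....", "6": "-....", "7": "--...", "8": "---..", "9": "----.",
        "E": ".",  # extend with whatever letters your codes use
    }
    UNIT = 0.1  # seconds per Morse time unit

    def send_morse(text, buzzer_on, buzzer_off):
        for ch in text.upper():
            for symbol in MORSE.get(ch, ""):
                buzzer_on()
                time.sleep(UNIT if symbol == "." else 3 * UNIT)  # dit vs dah
                buzzer_off()
                time.sleep(UNIT)      # gap between symbols
            time.sleep(2 * UNIT)      # letter gap (3 units including the above)

    # e.g. send_morse("E209", on, off) beeps out the error code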


It might depend on the person, but sound can be a powerful way to perceive the world. When I was living in another country and learning the language, I found that the difference between a lot of bad or mediocre language learners and the really good ones was that the good ones eventually stopped thinking in terms of grammar rules and dictionary words and started remembering phrases by how they sounded.

If you say something that is grammatically incorrect in your own language, most native speakers will tell you it just "sounds" wrong. They can't even explain the rules of why it is wrong, but they know it sounds wrong.



I used to debug embedded systems by hooking an I/O pin to the speaker, then would toggle the pin level at various points in the program.

You could tell where it went wrong by the "song" after a while. Sorta like tuning your car by ear.


This reminds me of when I was helping a friend debug his home-built computer. We couldn't get a POST message out of it indicating why it wouldn't boot. After a while of searching, we discovered it was because there was no speaker connected - the board didn't even have the pins for one, just the solder points. Once we discovered that, we used a multimeter to read the POST error: no GPU detected. Fun times...


> multimeter

Once I was introduced to an oscilloscope, I much preferred that. I bought a nice one off of ebay for about $60, and another peach of a scope from the pawn shop for $40.


I'm reminded of this set up at a fusion reactor: https://www.youtube.com/watch?v=IrtGp8hv-0Y

They put microphones in the reactor and monitors for them in the control room, so they can hear if something sounds wrong.


That's super interesting. I would imagine another benefit of that system is pattern detection. It would be much easier to notice a "beat" happening than with log messages, where you need to put things on a timeseries representation to see those patterns show up.


That is technically termed sonification, FYI.


I had a similar experience when I taught myself the Dvorak layout years ago. My fingers learned the common motions for whole words first. Actually, I'm not sure I could draw the layout of the qwerty keyboard I'm typing on right now from memory either.


Just for a laugh, try changing your phone keyboard to Dvorak. It seemed like a good idea but felt alien to see the keys I naturally use. Swiping was just ridiculous.

Obviously it’s just practice but I don’t mind using qwerty on my phone. It keeps it vaguely present in my memory.


I've been typing Dvorak on my computer since middle school, so I thought it'd naturally be a good idea to switch my phone... turns out not so much. While having all the common letters on the home row is good for regular typing, it is not good for phone typing, which is heavily dependent on autocorrect. If I miss a letter by one key, the result is still likely to be a valid word, because the common letters are clustered together and autocorrect misinterprets. For example, after typing "i", the letters "t", "n", and "s" are all adjacent, so if I meant to type "is" but miss by one key and type "in", it is not corrected. Other annoying ones are "by" and "my", because the b and m are next to each other.

Sadly, I'm in a bit too deep to go back to Qwerty on my phone (I'd be very slow for a while). I do still enjoy Dvorak on the computer, and while I don't think I'm a faster typist than I would be with Qwerty, I rarely (pretty much never) experience finger strain when typing for long periods.
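
You can see the clustering problem with a quick script: substitute each letter of a word with its physical neighbors on the Dvorak home row and count how many results are themselves words. (The neighbor map and dictionary below are tiny illustrative stand-ins, not real autocorrect data.)

    # Dvorak home row: a o e u i d h t n s
    NEIGHBORS = {
        "a": "o", "o": "ae", "e": "ou", "u": "ei", "i": "ud",
        "d": "ih", "h": "dt", "t": "hn", "n": "ts", "s": "n",
    }
    WORDS = {"is", "in", "it", "us", "an", "at", "on"}  # stand-in dictionary

    def one_key_misses(word):
        # valid words reachable from `word` by missing one key sideways
        hits = set()
        for pos, ch in enumerate(word):
            for repl in NEIGHBORS.get(ch, ""):
                candidate = word[:pos] + repl + word[pos + 1:]
                if candidate in WORDS and candidate != word:
                    hits.add(candidate)
        return hits

    print(one_key_misses("is"))  # {'in', 'us'} (order may vary): both valid

Run the same check with a Qwerty neighbor map and a real word list, and the difference in hit rate should show exactly why Dvorak is rough for phone typing.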


Supposedly the entire reason Qwerty was designed was to prevent typewriters from jamming by keeping commonly paired letters far apart, which is now ironically ideal on tiny screens.


Indeed. All the reasons Qwerty are bad for 10 finger typing are specifically the reasons it's fairly good for tiny touchscreen typing!


I switched to Dvorak in the early 2000's and typing qwerty on a keyboard is totally alien to me now.

However, any attempt at using Dvorak on a touch screen keyboard has failed as I can't rely on my muscle memory.

I found that for touch screen input I preferred qwerty, since I have a better feel for where the keys are when hunting and pecking. Funnily, when typing Dvorak my fingers know where the keys are, but I don't have a good conscious mental map of the layout.

Maybe it's due to the fact that you have to touch type dvorak without any visual help since most users use regular qwerty layout keyboards.


Can vouch for Colemak being hard to type correctly with on a phone. With qwerty I felt I didn't rely so much on autocorrect, but with Colemak I make certain mistakes quite frequently. Especially given that e, i, and o are all next to each other, a lot of the time when I want to type "o" I type "i" instead. I think it's gotten better in recent months, but certain combinations on Colemak just seem to be very suboptimal for a phone. Especially the placing of the vowels.

I'm used to it now and it doesn't quite bother me enough to relearn qwerty on mobile. Kinda stuck in this weird limbo of relying on autocorrect for every 5th word. Hahaha.


Funny story. I started learning Colemak must have been close to a year ago now. I switched my home computer and phone to Colemak, but kept my work computer on Qwerty as I wasn't quite ready to suffer the initial productivity hit even though I knew I needed to do it in order to really breakthrough. In the end I couldn't commit, so I switched everything back. About 6 months later my wife received a text while she was driving and asked me to reply to it. I picked up her phone and fumbled through typing out a reply. Something felt off. I opened up my phone to compare. I had never switched my phone back to Qwerty and my brain had completely adjusted. So much so that I didn't even realize my keyboard had been Colemak for the last 6 months.

It's still Colemak to this day.


I did this many years ago, and am now terribly slow at typing on a phone with a qwerty keyboard, and still cannot use a Dvorak keyboard on my computer...


It is painful to use Windows on my Surface as a tablet, since I want the physical keyboard layout to be Dvorak and the on-screen keyboard (OSK) layout to be qwerty (because I've never really seen the Dvorak layout). Microsoft obviously has no interest in making it accessible to Dvorak users.

I think the Dvorak equivalent of an OSK is something more like MessageEase. If they would bother to update their Windows offering to something a little more recent (high DPI instead of scaling, etc.) it would be nice, though.


Amusing story - when I was in uni I decided to learn dvorak and since I'd just got a shiny new smartphone I set the keyboard in it to dvorak too.

I was never able to get over my qwerty touch-typing skills and gave up very quickly, but I left my smartphone keyboard that way because I thought people's reactions when they tried to use it were funny.

Six years later, I bought a kinesis advantage and discovered to my horror that my touchtyping skills were useless with the alien key layout and I couldn't remember where any of the keys were because I'd not looked at them for so long.

However - the Kinesis had a "dvorak" button, and after having used Dvorak on my phone for so long I found it easier to learn that than to re-learn qwerty. 4 years later and I get hopelessly lost every time I try to use qwerty.


Wow, doing that makes so little sense. The benefit of Dvorak is less finger movement when touch typing. It would be interesting to optimize a keyboard layout for less finger movement while swiping, but that would probably look a lot different from Dvorak.


For swiping it's the opposite: you need to move your finger more to clearly disambiguate which letters you want to "select." The hardest words are things like "as" and "tool", where there are letters close by to mis-swipe over. At least in my experience.


I remember seeing a finger movement optimised keyboard a few years back, it was incredibly alien and based on gestures


You're probably talking about MessageEase.

I've been using it for about 7 months now and it's been great. I especially like being able to type complex passwords without looking at my phone's keyboard. (like when joining a new wifi network, for example)


I never understood why Dvorak is a thing.

I and others like me can touch type with Qwerty just fine. I can comfortably do 70-90 wpm, which is above my speed of thought. I don't suffer from RSI either and I've been touch typing for 20 years.

Also the Qwerty layout is more international. In non-English languages the Dvorak assumptions break down, the layout being optimized for English. And good luck finding Dvorak versions for other languages. Whereas all localized keyboard layouts that I've seen are based on Qwerty (for languages using Latin chars obviously). So for people communicating in more than English, like myself, anything non-Qwerty would be a pain.

So the biggest problem with Dvorak is that it is non-standard, and it ain't worth it imo. I already have issues using other workstations, of friends and colleagues, just from having remapped Caps Lock to another Ctrl.


I speak 3 languages on a Dvorak-alt-intl layout which I have optimized for coding: https://blog.yourlabs.org/posts/2017-12-22-dvorak-intl-code-...

Dvorak fixed my RSI problems. It is not possible that Qwerty has the same physical impact as Dvorak, because a Qwerty typist has to "walk" a lot more distance with their fingers to type the same thing than a Dvorak typist, who has all the vowels on the home row.

You can spot a Dvorak user by just watching them type: their fingers walk a lot less distance than other users. It's a trick that I learned from other Dvorak users who had spotted me.

I can still touch type with Qwerty, but my hands hurt after 5 minutes.

I type around 145 wpm, including in the 3 languages I speak (all ASCII-based, though) and in languages or frameworks that I know by heart.

To make it easy for colleagues and friends, I just have 2 aliases:

    alias aoeu="setxkbmap fr"

    alias qsdf="setxkbmap -I ~/.xkb code -print | xkbcomp -I$HOME/.xkb - $DISPLAY"

As you can see, each alias is what the four left-hand home-row keys produce: under Dvorak they type "aoeu", which switches to French, and under French they type "qsdf", which switches back to Dvorak. Of course it could be more complicated with more layouts if I had to support them, but then I guess a systray icon would work.

Anyway, also using Kinesis Advantage 2 LF, absolutely love it, and plenty of other "non-standard" stuff like the OP says, such as a tiling window manager.

Highly recommend learning or making a better layout than Qwerty, because the drawbacks are largely counterbalanced by the benefits IMHO.

It's not about whatever someone thinks is "standard"; it's about how you want to touch the machine for the rest of your life.


> "I can still touch type with Qwerty, but my hands hurt after 5 minutes."

That's not a good metric. In your case, you might be right, but for the general public it's like when picking up any new exercise or sport, like running or biking or anything really, when untrained you end up for the first few days with sore ligaments, joints and muscles. You get used to it.

My hands never hurt, even though I love writing long texts, long emails, long code comments, etc. I can also do 120 wpm but I don't feel the need to do it because it's stressful, not necessarily on the hands, but on my thought process. Speed isn't an issue.

If Dvorak solved your RSI, that's awesome, I can understand why you switched, however I don't think this is a causality issue, meaning that I don't think it was Qwerty that caused RSI in the first place. If Dvorak is a good strategy for you to manage the pain, then great, but it's pain management, not a cure.

I also used to configure my machines a lot, going into a yak shaving rabbit hole every time it wasn't functioning properly. I was on Linux back then, but I gave it up and nowadays I'm much happier. The only yak shaving sink I still have is Emacs and I'll get rid of it as soon as I find something better.


Indeed it's just my personal experience, but there's a fact that you seem to have missed: the distance that fingers must walk to type the same text is higher in Qwerty than in Dvorak; that's the whole point of Dvorak and of others such as Bépo. So, typing in Qwerty is more physical work than typing Dvorak. There are ways to calculate that pretty accurately.

I don't have proof that less finger distance means less hand work means less pain, so maybe you're right, maybe it's just a coincidence.

Or maybe less finger distance means less work which means less tension accumulated in the muscles and then indeed less pain.

Nonetheless, I changed "fixed my RSI" to "fixed my RSI problems" thanks to your feedback, which should clear up the misunderstanding.


It's also worth mentioning that some of the Qwerty pain can be psychosomatic in origin. Even for me, who hasn't suffered RSI, it was pretty clear that I didn't feel the exertion of Qwerty until I had gotten used to something better (Colemak, in my case.) Once I knew how nice it can feel to type on a keyboard, anything less nice starts mentally hurting – and I'm sure, for some people, also physically.


> The only yak shaving sink I still have is Emacs and I'll get rid of it as soon as I find something better.

I'm a long time Vim user, myself (and no, not looking to spark a holy war here). One observation I have about my own habits is that I subconsciously slap the ESC key every time I'm done with something, even if I'm in a non-modal editor like Notepad or SQL Management studio. It's just become so ingrained in me that I can't stop doing it. Mostly because there's no negative feedback loop associated with doing it.


Same here; it's a pain when working with spreadsheets, where this clears the cell. Drives me nuts. Also, I always turn on Caps Lock when using someone else's computer, since I rebind Esc to Caps.


I feel like UI/UX needs to be completely redesigned, and Dvorak isn't even half (or any) of the battle. Single-key copy/paste, undo/redo, keyboard-driven UIs, etc. I say this as I type on Dvorak: while I love it, I also regret it due to interoperability issues and the breaking of common keyboard shortcuts (I miss Ctrl+CVX).

But I just aliased `us` and `dv` to setxkbmap shortcuts, so thanks for that!


That's what Colemak is for. IMO no one should consider learning Dvorak these days.


Dvorak still has the alternating hands rhythm that Colemak lacks, being more built on one-handed finger rolls than Dvorak. Most people I speak to prefer the finger rolls of Colemak, but some people (like me) prefer the alternating hands rhythm of Dvorak.

(That said, I still use Colemak because it happens to work better for my native tongue. If I only wrote English, I would have been a Dvorak user.)


What I'm talking about is a UI/UX problem that cannot be solved by key layout. I even experimented with an Ergodox and ended up in keybind hell really quickly, because supporting one application broke others very quickly.


I switched to Dvorak after I got RSI about 10 years ago. It was so bad that I couldn't type anymore. I did some research, and since your hands travel less than with Qwerty, it seemed Dvorak could help, so I taught myself. I was pretty desperate to solve my problem.

Fortunately, I haven't had RSI ever since I switched to Dvorak.

Switching to Dvorak is quite tedious, and I wouldn't recommend it unless you have a good reason to do so.

Regarding internationalization, I write in English, Spanish and Italian in Dvorak with no problem. It really doesn't make any difference to me as compared to Qwerty.


> And good luck finding Dvorak versions for other languages.

I definitely recommend Neo2 for German users. Ctrl + a, x, v and c are still typed with the left hand. Letter frequency is optimised for German first, English second. All programming symbols are easily accessible, and there are additional layers for Greek symbols and math notation. Also, users of non-US layouts have much more to "gain" by switching from qwerty-like layouts: due to umlauts or accents, brackets tend to be way harder to reach.


This reminds me of how, when learning a foreign language, you need to think in the foreign language without the translation step.


Any tips on how to start then? I used to know all the letters in Morse code but it was basically just learning the chart as you mentioned. I've since forgotten most of the alphabet.



I'll have to try that. I've tried some acoustical training before, but not starting at full speed like Koch's method.


Once you learn all the letters, as you walk around, pick various items you encounter, and say their name in Morse code. A car: "dah-dit-dah-dit. Dit-dah. Dit-darrrr-dit."

For fun, pronounce "R" like a pirate would.


Thanks for this one. I haven't heard this suggestion before. I'll keep that in mind if I ever try to pick it up again :)


If you talk to old-school CW operators, they will all tell you that you need to go beyond the letters and into the words and sentences, like you skim over a paragraph. In fact, if you operate Morse too slowly, many of these old-schoolers don't want to talk to you, as you break the "flow" - which is the exact thing you're trying to achieve in Morse fluency.


It's just like a language. If you want to read Morse code, look at that chart, but if you want to hear it quickly you'll need to do audio flashcards. In Anki it's pretty easy to add cards that have no text, only sound, which should be all you need.
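
If you want to build those audio cards, the Python standard library can generate the Morse audio for you. A minimal sketch that renders a word to a WAV file you could attach to a card (the Morse table is truncated; the tone and timing constants are just reasonable defaults):

    import math
    import struct
    import wave

    MORSE = {"c": "-.-.", "a": ".-", "r": ".-."}  # extend to the full alphabet
    RATE, FREQ, UNIT = 44100, 700, 0.08  # sample rate, tone Hz, secs per unit

    def tone(seconds):
        n = int(RATE * seconds)
        return [int(20000 * math.sin(2 * math.pi * FREQ * i / RATE))
                for i in range(n)]

    def gap(seconds):
        return [0] * int(RATE * seconds)

    def render(word):
        samples = []
        for ch in word:
            for sym in MORSE[ch]:
                samples += tone(UNIT if sym == "." else 3 * UNIT) + gap(UNIT)
            samples += gap(2 * UNIT)  # letter gap
        return samples

    samples = render("car")
    with wave.open("car.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

Put "car.wav" on the front of the card and the word on the back, and you're drilling sound-to-word directly, skipping the chart.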


I wonder if the same thing is possible with sight reading music.


> you'll be bumping the speech rate up to 1.75-2X normal speech

Try 5-6x. I've met some blind people who listen to audio so fast, it sounds like noise to me. As someone who isn't blind, I regularly listen to lectures and audiobooks on 1.75-2.5x (depending on the base rate) without issues, but 6x is just something else entirely.

> Typos will be easily spotted as they just won't "sound right".

Really good way to describe this. It's almost like listening to an audio fingerprint which magically indexes the answer in your brain. My blind friend was doing law, but I'm sure she did something similar to help her map the dependencies between laws.


> Try 5-6x. I've met some blind people who listen to audio so fast, it sounds like noise to me. As someone who isn't blind, I regularly listen to lectures and audiobooks on 1.75-2.5x (depending on the base rate) without issues, but 6x is just something else entirely.

Yes, indeed. I was writing that original comment for someone just getting started. What ends up happening is no matter how fast the speech rate is, you'll eventually get used to it and your brain will grow impatient. So you'll bump the rate up another 0.25 and rinse and repeat. Pretty soon it's incomprehensible to anyone but you.

I'm pretty sure anyone can get used to 2X+ without much effort. After a while you'll wonder how you ever listened to speech at a normal rate. Be careful, though, as I've found myself grow impatient listening to people in real life and can trace it back to my high-speed listening habits.


When I listen to audiobooks or podcasts, I can easily bump up the speed by 25% until I hit 2x. Faster, depending on the reader and the type of book. If it's a good non-fiction reader like Edward Herrmann, I can go faster. If it's a fiction book with an expressive reader using accents and emoting, I sometimes can't get beyond 1.5.


Some content is even improved by being sped up. Doubling the speed made me enjoy Car Talk again. Having to constantly wait through multiple seconds of hacking laughter was breaking the flow.


Does it affect listening to music? Does it also trigger the impatience?


I'm not blind, but I do like to listen to podcasts, YouTube, etc. at 1.5-3x, and I've certainly caught myself getting annoyed at how long it takes to listen to a new album I want to check out. But I've never tried to speed it up; I just remember to relax and enjoy it.


> My blind friend was doing law, but I'm sure she did something similar to help her map the dependencies between laws.

Nobody calls them dependencies, but yes, you would be able to hear if `Re Estates Croft, deceased [2018] NSWSC 1303` were wrong.


I'm sighted and super ADHD. I run my Kindle at 4x. It's super annoying because they speed up the words but not the spaces; half a goddamn sentence could go by during the pauses.


> Also, because code is no longer represented visually as blocks, you'll find you're building an increasingly detailed memory model of your code.

does refactoring by other team members have an outsized negative effect on this?


Relatedly, are there challenges or benefits to reading diffs in this model?


Since I work on the D programming language design, I'm interested in any advice you can give on what would make a programming language design more accessible to blind programmers.


> You'll be amazed how good your memory will get without the crutch of sight

That's all really interesting. I personally have a problem with my thoughts in that I need to "sound out" or read them aloud in my head. This sometimes really slows me down; I mull over a thought until it sounds right. Especially when I'm tired, I can get stuck in a loop. And it applies when reading too: I can force myself to read faster, but usually I slow down and hear every word out of habit.

I know some people don't have this inner voice. I wonder if it means they can think more freely.


I can't speak to physical blindness, but I have aphantasia[0], which basically means I cannot voluntarily invoke my mind's eye. Up until I found out about it last year, I thought that "visualizing" things was a metaphor for thinking, as I didn't realize people could actually see things without looking at them.

There are varying degrees of aphantasia. For context, I am completely unable to “see” anything in my mind. I do not dream either. If I close my eyes and think of something, nothing happens visually, though I can more easily remember things about whatever it is. I would describe it more as complete emptiness, instead of darkness (as darkness implies there is something there at all). I have been like this since I was born.

One thing that stood out to me in the GP's comment was that programmers tend to visualize code to reason about it. I asked a bunch of my friends after finding out my mind's eye is blind, and they all said they do it at least sometimes; some of them do it constantly.

I obviously don’t, but I can say that I have a very good memory (in general, but also for code), and often find myself capable of context switching between related bits of code faster than many of my peers. I rarely reach for a diagram for explanation, however I do use them very regularly for documentation and explaining designs because I find other people seem to be able to process that format better.

I don’t know if my aphantasia contributes to any of this, these are just things I’ve noticed I’m good at. One thing I am terrible at, however, is design.

[0] https://en.m.wikipedia.org/wiki/Aphantasia


> For context, I am completely unable to “see” anything in my mind. I do not dream either. If I close my eyes and think of something, nothing happens visually

Correct me if I'm wrong but I suspect a lot of self-diagnosed cases of aphantasia are due to the ambiguity of language (specifically, English)... Qualia lost in translation.

I suspect nobody really sees something visually when thinking/imagining something in a sober waking state (barring maybe a few rare exceptions or psychotic disorders). If you saw something superimposed on your visual field, with closed eyes or not, that would be called hallucinating.

For me, "seeing it in my mind's eye" is a different kind of "seeing": there is no visual input.

Example: imagine a tree. Where are the branches? Where are the roots? Where are the leaves? What color are the leaves? Etc... You can answer these questions, right? The place where you find the answers is where the mind's eye is. I've yet to meet someone who actually sees the tree with the same visual clarity as if looking at an actual tree.

Dreaming is somewhere in between for me: it's usually visual, but not of the same visual quality as waking life, except when lucid. A lot of people don't remember their dreams, but that doesn't mean they don't have them. Ask yourself right after waking up in REM sleep (don't even open your eyes if you can): what just happened? What were you thinking?


> I suspect nobody really sees something visually when thinking/imagining something,

I suspect that you may have some degree of aphantasia.

Visualizing something engages the same visual processing as sight does. Personally I have a hard time visualizing something and looking at something at the same time. I can only pay close attention to inside or outside, not both simultaneously. (Casual attention, like half-listening, is possible, otherwise I'd be a terrible driver.)

To think of it another way: when you say "I've yet to meet someone who actually sees the tree with the same visual clarity as if looking at an actual tree," maybe you should ask some more. I can look at things, close my eyes, and continue studying them up to the limits of my memory and original attention.

Aphantasia seems to come in degrees, not a binary "have it"/"don't have it."


Perhaps... what I mean is this: close your eyes and visualize a tree. Do you literally see the tree in the darkness of your closed eyelids in front of you, visually? Even just vaguely? If not, then imho that's not really visual. I "see" the tree, but it's a different kind of seeing. Perhaps that's a degree of aphantasia, but how would I know? ;)


I do, but I haven't practised the skill much lately so it's not as vivid as it used to be.

It's more like a second virtual desktop than superimposed; it takes up the same coordinates, but there's a clear distinction between "real" and "imagined" objects (unless I'm particularly tired, but that's closer to "hallucinating" than "visualising").

I used to be able to visualise things in positions relative to real objects, but at the moment I can only manage floating in the air, such that focusing on the position of imagined objects de-focuses the background, either relative to my field of view (so it moves when I move my head) or in a defined position in space (so it doesn't move when I move my head – though obviously this is limited by my ability to instinctively judge positions and distances, so it often doesn't quite work right when things are moving in complex ways, or I'm moving fast).


That's blowing my mind right now. First I thought people who can visualize to the point of actually seeing what they visualize were pretty rare. For me what I visualize takes place in a mental space: it's a clear mental image, not a literal visual image.

Have you always been able to do this, because you mention practising this skill? Also, does this only happen after focussing on something, or does it happen spontaneously when thinking? I imagine it would be pretty distracting if someone says for example, "don't think of scary thing X", and then the scary thing manifests visually for you because you are thinking about it.


> I imagine it would be pretty distracting if someone says for example, "don't think of scary thing X", and then the scary thing manifests visually for you because you are thinking about it.

That sounds about right. For me, "don't think of a pink elephant" results in the image of a pink elephant, though if I'm not concentrating on its location it's just… somewhere.

> Have you always been able to do this, because you mention practising this skill?

I think I've always been able to do it – but I haven't been doing it as much since I stopped playing with toys, going outside and generally doing activities where such a thing would be useful, so it's less instinctive.


Uh yeah, same here. Like, I can remember what my daughter looks like as she sleeps in her crib, but visually what I see (with my eyes closed) is just blackness. I've always thought of it as something akin to a prerendering buffer or virtual display context in my mind. What the parent poster is saying makes me think that for them it's more like experiencing an actual display, which seems like it would get really confusing layered onto the visual input from their eyes.

The only time I've ever experienced anything like that I was trying psychedelic mushrooms for the one (and so far, only) time, and so it was literally a hallucination.


That's an excellent analogy! The mental space is exactly like a prerendering buffer. When asleep and dreaming, there is no overriding visual input, so the stuff in the mental space gets rendered. And I guess some people can access this even when sensory input is present. That must be wild.


I had many conversations in the weeks following my original discovery, and this comment thread reflects them perfectly.

Me: There's no way everyone actually sees things and I made it 30 years without knowing.

Everyone else: Yeah I see things.

Me: but, like really see or just remember features?

Everyone else: <some variation of quality, but it was always described visually and everyone agreed it was basically like seeing>

I put "see" in quotes in my original comment simply because that is how everyone I spoke to described it; that it's clearly not physical vision, but it's also clearly a visual experience. I have absolutely zero visual experience when I close my eyes, and I never have, it is simply emptiness.

You seem to describe dreaming as if you experience that visually too, while not being able to do it while awake. It's worth noting that this is not uncommon for people with aphantasia, which is why it is defined as the inability to _voluntarily_ invoke your mind's eye. Many people with it can still dream visually; however, I do not. I go to sleep and then simply wake up with nothing but emptiness in between (not an experience of emptiness as time passes, but more like a time skip).


Yep, imagining/visualizing something goes without an actual visual experience for me. I've always assumed the same thing applies to most people, but I guess I do have some degree of aphantasia.

I can imagine something, but it's like rendering a layer with the opacity brought all the way down to 0: my "mental computer" knows it's there and can tell its shape, color, and all its features & details in 3D space... without seeing the actual visual image.

Dreaming on the other hand is usually multi-sensory and especially visual for me. Much like being awake or in a VR environment, but with internally generated input.

Have you spent time really consciously investigating your sleep? To the point of interrupting it with alarms, etc.? I'm someone who's very interested in my dreams: for years, every night before sleep I've wondered what I will dream, sometimes I get lucid during the dream, and when I wake my first instinct is to recall my dreams. I think this is why I dream so much. I wonder if the same thing applies to visualization, and whether or not aphantasia is "curable" through consciously exercising visualization.


When I visualize something I don't literally see it in my field of view, but I can still 'see' it. It's similar to audition (hearing in your mind). Read something out loud in your mind, or play a song in your head. It's the same thing: you don't literally hear it, but you do 'hear' it.


Yes but that was my point: I suspect (I can't be sure of course) that this is a limit of the English language. We call it "seeing", but it's not. If it would be real seeing, you would visually see the thing you are visualizing in your field of view, as if it were literally there (like in a dream).


Hehe, perhaps no and this is the moment when you figure out that you have mild aphantasia. When people say they "count sheep jumping over a fence" it's not a metaphor.


That's kind of what I've been trying to figure out: when people say they count sheep, do they close their eyes and literally see sheep jumping over a fence as if watching a movie or wearing a VR headset? Even in low resolution... That would be pretty impressive... why would we still need movies and games then? ;)

Of course I can imagine sheep jumping over a fence, even in vivid detail, but this imagining is not really seeing visually like in real life, not even like in a dream. As I said, I would describe seeing things that aren't there (with closed eyelids or not) as hallucinating. Perhaps not being able to do that at will is aphantasia, I don't know, but it would blow my mind if it were.


Generally when I’m idly visualising something it is ‘in my mind’s eye’ as you put it. But if I’m really focused and close my eyes then everything else seems to disappear and I’m ‘really’ seeing it, not on the inside of my eyelids but it encompasses my entire visual perception if that makes sense

More rarely I also get it with sounds and then it definitely does feel that the sound is originating externally to my mind

You’re right that these experiences are more like hallucinations but I think they are points on a spectrum rather than separate things


Aphantasia is a condition I had never heard about. Your experience and the description on Wikipedia make me think I was born like this as well.

I had many arguments with my wife about how I did not have visual dreams and did not see anything when I close my eyes. When she said she sees stuff when she closes her eyes, I always assumed it was some sort of metaphor, and that it was usual for human beings not to see anything when their eyes are closed.

I think when I "visualize" something, not limited to math and code, I do not see anything per se. Mostly I think my thoughts are directed graphs whose nodes are concepts I have in memory.

May I ask how you got a diagnosis? And did you learn anything of use thanks to it?


Not OP but...

There isn't a scientific diagnosis; it's more of a self-observation kind of thing right now. The reason is that there has been very little research into it so far. You may notice that the Wikipedia article is extremely short and most of the references are news articles, opinion pieces, and blog entries. Personally, I've found that the realization has offered slightly new ways of approaching life. It's fun to explain to people how I think and how that may differ from them. Nothing truly changed except perspective.

For reference, I also think mostly in directed graphs. I'm also able to roughly draw a shape in my head, but it's very quickly erased and is hyperlocal, like trying to draw in the air with a sparkler.

There is a subreddit, r/aphantasia, but I've found that it's mostly a mix of curious gawkers and bitter people claiming that having it has somehow deprived them of essential life experiences.


Not OP but this felt like a simple test - https://twitter.com/backus/status/1091203973246111744

There's the Vividness of Visual Imagery Quiz but that's self-administered and is a questionnaire.


hmmm... this is interesting.

I think I must have this too. E.g. if I close my eyes and try to visualise the face of someone that I hang out with every day, I can't do it.

If I try really hard, the best I can do is a flash of bits of some photographs I remember. It also starts to give me a headache. I think, though, that if I want to remember how something physically looks, it helps to have my eyes open. Even then it's more like flashes than being able to actually see something.

Mostly when I "visualise" something, e.g. the house where I live, I think in terms of how things are spatially connected to one another. It's more like a scene from The Matrix: what I'm remembering is more the concepts of things and where they are spatially, rather than what things actually look like. This is pretty much also what happens when I'm coding - thinking in terms of concepts and how things are connected together spatially.


[flagged]


Aphantasia is real and there have been discussions on this topic on HN. Also covered in the media for example https://www.bbc.com/news/health-47830256


I find myself going quieter when I read faster. I do take pauses to think through what I've just read. You may want to try separating reading and thinking, and see if that helps you no longer need to think aloud each word.


FYI that process is called "subvocalization" - I'm only passingly familiar, but if you search that term there's lots of literature about it.


Yes that's the one, I forgot what it was called. Always found this part interesting (from the wiki):

"This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading."


Try gently pressing your finger against your larynx when you read in your head - you'll likely find you read faster - this works for me, although I tend not to bother.


Interesting tip, I will try it.

The worst thing about the inner voice is that it's not just when reading; it won't be quiet at bedtime. And I have a tendency to complete thoughts as words. Even if I know where a thought is leading, and what the conclusion is, it feels like I have to see it through and phrase it correctly in my head. I feel very uncomfortable leaving it hanging.


Exactly. Sometimes I even feel compelled to complete the sentence more than once (as if I wasn't paying attention enough the first time, so I feel the need to say it again). I think there is a subtle difference between that and "reading without production" (when I try it, it feels more like skimming text; I have to really try to pay attention). To me, reading/inner-speaking feels like engaging with the text; to only be able to "see" words would be lifeless (I'm assuming that some/many/most people can do both).


I also rely on the "inner voice" for reading and writing, but I can skip it when programming. I think it's related to how you learned the language in the early stages.


Could you expand on how you skip it for programming?


When I am designing the solution, I still have the inner voice. But when I'm implementing in code, I can do it almost "brainlessly", even when listening to music.

I'm not sure how I do it, but I guess because I do it a lot, it's become as familiar as walking?

Edit: Maybe it's because I didn't learn programming by listening to a teacher; instead I learned from books and trial and error. I guess it's more like walking or playing a sport: you don't need an explicit inner voice to guide your body movements.


I have this inner voice when reading English, which is my second language. However, if I am reading stuff in Chinese, I can skim through without the inner voice.


When I read Spanish (2nd language; not great at it) I sub-vocalize as I do in English. But if there is a long, complex word that I would have to laboriously "pronounce" (though I know exactly what it means ahead of time), I will treat it as a gestalt/symbol (so, it's like "cheating" on the vocalizing). So sub-vocalization isn't actually needed for comprehension, but I'm guessing it is a parallel mental action/process (probably correlated with how we learn to read a language and with personal characteristics). Maybe non-sub-vocalizing is really just suppressing this parallel processing workflow. (I'm wondering if people who "don't need to do this" could read backwords (accidental pun!) effectively.)


This is incredible! If you have a couple of minutes, would you mind sharing some of the Mac tool recommendations? I'm not blind, but as someone working on consumer apps, good examples really help!


> because code is no longer represented visually as blocks, you'll find you're building an increasingly detailed memory model of your code. Sighted people do this, too, but they tend to visualize it in their mind. When you abandon this two-dimensional representation, your non-visual mental map suffers no spatial limits

That's a really interesting idea to me, because I have been trying to improve my memory and recall (mostly of things like telephone numbers) using the Major Memory system, and the emphasis behind that and similar techniques seems to be constructing visual imagery which encodes other information. So, it sounds like you are saying that you have found other pathways/techniques that explicitly do not do this.


It is entirely possible I'm still using my visual cortex. I was once sighted. When I'm in flow, my wife tells me that my eyes spasm as though they're following some kind of pattern in space. I wasn't aware I was doing this until she mentioned it. I can say, however, that I'm not visualizing the way I once did. In fact, because I don't have visual input, conjuring up some kind of imaginary visualization would actually slow me down and be a cognitive burden.


Thank you for sharing your experience.

Would you say your ability to visualize has declined, or you’ve just favored other methods of thinking?


I tried that as well and it does not work for me. What works is association with stories. For example, to remember a credit card number I associate each pair of digits with a year, which gives an event, and then the number becomes a sequence of events.

What also works for me to train memory is to learn phrases in a foreign language that I do not understand. Surprisingly, combining those with text I do understand leads to better recall of the whole text. It is almost as if understanding harms memory.


Hi, I am in a similar boat: I still have sufficient vision to code without a screen reader, but my concern has always been about the non-programming tasks that any modern developer is faced with on a daily basis.

In my job I typically spend 3-4 hours a day in the IDE, but the rest of the day is spent looking at production issues and servers, looking at the RabbitMQ management UI, etc. The emphasis being on "looking".

I'm just curious as to how a screen reader would allow you to do those things, in a high pressure environment?


Do you use the screen reader for your code, too? How do you debug your code?


Can you please recommend software for this? Which screen reader should I use? How should I configure it? How do you code with it?


Just curious, is it harder to read whitespace-indented code like Python versus curly-brace code like JavaScript?


I don't really find one easier than the other, but I do use indentation in the same way sighted people do. I find it a lot easier than counting braces in my head.


Does indentation sound like a pause in the speech?


NVDA just says "one tab" or "two tabs" before the line of text, but only when the indentation level changes.


> you'll be bumping the speech rate up to 1.75-2X normal speech. You'll be the only one who can understand your screen reader

Huh. I already listen to most podcasts and recorded presentations at 2x. Now you make me wonder if I could process information faster with a screen reader even though my sight works just fine...


I worked with a blind dev, and his screen reader was incomprehensible to everyone but him. It had to be at like 5-6x. The guy was a wizard. Sure, he had some disadvantages with vision, but he was wicked fast and would pump out the most code.


As someone who also does this, I find that the problem becomes not processing the information, but retaining it, and I haven't found a solution to that yet.


I doubt regular folks can reach 5x or even 2x with good retention. Perhaps if it's a podcast of people chatting about random things it's possible, but with any intellectually stimulating content you'll probably have problems. Visually impaired people probably have an advantage on account of having their entire visual cortex freed up and possibly able to help with this (not sure if it does; any neuroscience majors here?).


(As a sighted person) I watch all my videos or podcasts at 2x. Depending on how new or dense the content is, I might have to occasionally dial it back, but I think I do fairly well at 2x (usually, this is only the case if I'm distracted or the slides are going by too quickly as well). Often I can even up it to 3x.


Maybe retention is worse, but I'm still better off. Higher speeds are more engaging and enjoyable to me, as bad speakers and long pauses matter less. Therefore I benefit from watching more talks and lectures. For better retention it is preferable to occasionally re-watch good talks for spaced repetition.

The Video Speed Controller extension is indispensable part of my browsing experience, mostly for keyboard shortcuts to speed up/down and << >> videos.


I used to listen/watch everything at 1.5x-2x speed but found a similar result. Now I only increase the speed if it's information I only need to process once (e.g. the news) but not remember later (lectures, tutorials, etc.).


I've been doing that lately (1.5x-2x), but taking notes, and making sure to work on any exercises provided after the lecture. I need notes for retention even at 1x, so it doesn't feel like extra work.


The problem with retention is independent of the speed of processing information.


I've heard screen readers going way faster (I don't know the exact rate) than what videos at 2x sound like.

I also think it's important that screen readers use an "old school" synthesized voice, instead of a voice that sounds natural, because each phoneme is articulated distinctly.


I do that, too.

Especially presentations. For most movies, I am only comfortable with 1.25x. In presentations people talk extra slowly; it is classic advice when learning to give presentations: talk slower. After getting used to 2x speed, it becomes extremely frustrating to listen to a live presentation.

Although even presentations at 2x still feel slower than my reading speed as a kid. But I cannot speed-read anymore since I need glasses.


I've had the same experience with audiobooks. I find it's easy with entertainment: in normal speech there is a lot of redundancy, so you can miss entire sentences and still follow. I don't think this would be the case with a screen reader.


You got me curious.

I just put some news interview on YouTube at 2x.

It is fun how even the accents are easily recognizable.

Anyway, this is in Spanish. My guess from reading the comments is that Spanish is a much easier language to listen to at faster speeds.


If you want to try it, older TTS voices such as Eloquence are intelligible at much higher speeds than the modern natural-sounding voices.


[flagged]


Personal attacks will get you banned here, so please don't do that, even if another comment rubs you the wrong way.

https://news.ycombinator.com/newsguidelines.html


Asking for personal decency toward another human being is a banning offense here?


"Thanks for taking the time to share how superhuman you are" is not "asking for personal decency".

Please don't post in the flamewar style to HN. We're here for curious conversation, which doesn't go along with the style of argument in which people cast each other's comments in the worst possible light. If you review the site guidelines you'll see that many of them guard against that argument style. That's no accident, because it's so common in online discussion generally, indeed has become the default. It takes conscious work to have a place that doesn't fall into it. That's what we're trying for.

Note this one, for example: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith." If you'd review https://news.ycombinator.com/newsguidelines.html and take the intended spirit to heart, we'd be grateful.


The person took the time to mention their ability to listen and comprehend at twice the presumed average speed. They did not mention the original poster with any sense of compassion; they ignored them.

What is the strongest plausible interpretation of what this person said? One under which the criticism in my post is not a fair reading?


The plausible interpretation is that the commenter got excited about screen readers because they unexpectedly connected to something he was already interested in, and it made him curious. Not only is that a perfectly decent thing, it's an example of the curious conversation that HN exists for.

Meanwhile, snarking and putting down other users, which unfortunately are what you did, are examples of what HN very much does not exist for.

It's really easy to interpret other people's internet comments in ways that add things that weren't necessarily there. We have only tiny blobs of text to go by. We don't know each other, and we don't even have the cues that come from voice, body language, and the like. As I read the GP comment, it seems to me that it was you who introduced the notions that it was somehow about the commenter puffing himself up, or that he was somehow lacking in compassion towards the original poster.

In a thread of 200 comments, there will inevitably be rivulets of conversation that aren't about the main theme. Such tangents are only bad if they're somehow generic and predictable. This one was interesting, still very much in the orbit of the main topic, and I really don't think there was any meanness in it. I'm pretty trained to pattern-match meanness after years of trying to get people to be kind to each other on HN, and I'm afraid it was your comment that was the mean one.


Without meaning it as a personal attack, do you have any reason to think that 6 years of trying to detect meanness has made you any more accurate at it than you were before, or than anyone else is?

When you say "trained to pattern match it", is there any feedback to change your models? As moderator-by-fiat it's your decision what comments are mean, which is potentially a self reinforcing loop.


Here's some feedback: under Dang's eye, HN has become a paragon of online discussion. I say this as someone who has occasionally been called out.


He didn't say that he can listen two times faster than the average person. He said that he can listen to podcasts at twice their recorded speed. There is nothing superhuman about this. Many people do so. People naturally speak slower than their listeners can understand, and practiced public speakers are trained to speak even slower than that.


Listening to speech at double speed is really not that noteworthy, as anyone can do it with a bit of practice. I didn't construe it as bragging, just as outlining the thought process which led them to think that it might be worth them trying a screen reader themselves.


"Now I will have less distraction." - Leonard Euler upon losing his right eye vision


"When you abandon this two dimensional representation, your non-visual mental map suffers no spatial limits. "

My visual representation is definitely three-dimensional, and I am not sure what you mean by abandoning it. I mean, code is statement blocks and control flow elements; it is data and the connections between them at execution. I can imagine that not being visual gives you much more focus and fewer distractions on the mental map, but is it really different?


I don't have this super-power but I can imagine that it could work.

When I'm digging into foreign code especially, I always find myself overwhelmed by the intricacies of the call stack. I get lost in layers, get confused by indirections, and lose track of where I was a few steps before or which specific conditions led me down a particular path.

I guess that if you're blind you need to become a lot better at keeping track of things for basically the entirety of your daily routine. Imagine merely cooking without sight: you need to know where all the ingredients are, and you lack a lot of the feedback that catches mistakes (did you add the pasta to the water yet or not?). I assume that you need to constantly maintain a fairly detailed map of the world in your mind where the rest of us can just wing it, basically. I can believe that this skill might make you a better coder.


"I don't have this super-power but I can imagine that it could work."

I don't think it is a super power. I believe every programmer does that; I just maybe do it a bit more consciously. And when I have to process new code, that just takes time, like for everyone else... and doing it in a hurry will mean a wrong mental model.


> My visual representation is definitely 3 dimensional and I am not sure what you mean by abandoning it? I mean, code is statement blocks and controll flow elements. It is data and connections between them at execution. I can imagine, not being visual gives you much more focus and fewer distractions on the mental map, but is it really different?

An earlier commenter described it like a directed graph. I think that's a pretty good way to describe it. As for whether it is different, I don't know how others think. But one thing I do know is that this is how my mind worked for non-analytical thought before I went blind and still does. I'm just more aware of it now that I don't visualize as much. I only really visualize things these days as a way to communicate ideas with sighted engineers.


Hm, maybe the word "visualize" distracts from what I meant. When I program, I also do not visualize it in the sense of having a graphical picture in my head. (At least not all the time.)

I mean the mental graph and flow of the program in my head. Being a visual person, I can translate it into something graphical, but that does not necessarily mean it is graphical. But thinking about it gave me some interesting insights. ("Insights" - again, a word from a visually dominated world.)


I imagine it is like representing code with a flowchart. Suddenly you are a lot less restricted by your mental representation and you can draw arrows any way you want.

I often experience something similar, even though I'm not blind: When I'm doing algebra I sometimes get stuck 'looking' at the equation and systematically trying patterns on it. But what works much better is to really load the equation into your brain so that you have an intuitive idea that guides your pattern matching. If you are blind, I suppose you always need to load it up.


Would you be willing to chat privately with a colleague of mine who is going through a similar loss-of-sight transition?


Wow. Great response. These comments are why I come here. Thank you!


Just wanted to say, Thank you!


This makes me wonder if we would benefit from having children occasionally train their brains via partial sensory deprivation. Maybe there are aspects of the human brain that go unexercised because we strive to use all of our senses for everything, simultaneously, as much as we possibly can.


I recommend using a tiling window manager - they allow you to organize windows logically, rather than spatially.

I have also written some plugins for using Vim (text editing) and Weechat (IRC chat) with speech synthesis:

https://git.sr.ht/~sircmpwn/dotfiles/tree/master/lib/vim/vim...

https://git.sr.ht/~sircmpwn/dotfiles/tree/master/.weechat/py...

And I have a script for Sway (a tiling window manager) which also gives you audible cues:

https://git.sr.ht/~sircmpwn/dotfiles/tree/master/bin/swaytal...

All of this is somewhat incomplete, but it's a good starting point if you want to get used to them and work on improvements while you're still sighted. Good luck, and let me know if I can be of service.
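
If you want to experiment with the underlying idea before digging into those, the core pattern in all of them is just piping text to a speech synthesizer. A minimal sketch in Python (assuming espeak-ng is installed; the speak helper is my own invention, not part of any of the plugins above):

    import subprocess

    def speak(text, wpm=300):
        # espeak-ng reads the text aloud; -s sets the rate in words per minute.
        subprocess.run(["espeak-ng", "-s", str(wpm), text], check=True)

    speak("2 unread messages in #weechat")

Everything else - the Vim and Weechat plugins, the Sway cues - is layered on top of that one primitive.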


Are you sighted?


Not perfectly, and not for long. I wear glasses, but they only do so much, and my vision worsens every year. I use some light assistive technologies on the daily - higher contrast, large fonts, zooming in on things. To test the tools I linked to, I spend the occasional workday with all of my monitors turned off, relying on these tools to get work done. I also have a braille reader that I occasionally pull out.

I have a different philosophy and approach to using computers than most, and that affects my views on accessibility. Stapling a screenreader onto a graphical application, for example, to me seems like the wrong approach. Text-based applications are much more accessible, and these are my bread and butter. To this end, my work on accessibility involves making more information available as text, organized logically rather than spatially, and making it easier to access and manipulate that information with vision impairments (and other sorts of impairments, too).


As someone preparing for this, too, I still have no clue how to "rasterize" code quickly. Voice always feels inefficient, and braille feels like a joke when it comes to the amount of information being displayed. Do you have suggestions? Do you transpile code?

I also use Vim because it feels like the best candidate for voice or braille integration... but I have no source on how to actually do this properly. Are there good reading materials on this?

Currently I am trying to build a semantic web browser, partly with the intention of filtering out all the legacy crap CSS that prevents interaction with the content [1], and partly with the idea of being able to train CNNs on the content... but when it comes to code, my memory of it seems to suck so hard that I never have a clue what I wrote the day before.

[1] still alpha as hell: https://github.com/cookiengineer/stealth

(Also a long-time observer of your work here. You are one of the good guys. Stay awesome!)


Even with the aid of sight, I don't read an entire codebase at once. I follow the logic, a few lines at a time, as it moves through the parts I'm interested in. I use grep to find the things I ought to read. With this approach, I build a mental model of the codebase quickly, with lots of blanks - but with detail in the areas I'm focusing on.

As for how to actually rig up braille readers on Linux, check out BRLTTY. It's pretty straightforward. I was working on an Alpine Linux spin which was more accessible out of the box, but I got discouraged by various circumstances and shelved it.


I think your use of the term "braille reader" might be causing some confusion. I know you use brltty. But do you use its text-to-speech output, or do you use a refreshable braille device that moves little pins up and down to form braille cells? If the former, I'd suggest referring to brltty as a screen reader.

Also, do you run brltty on a Linux text console, or in a terminal window in your Sway session?


I use a refreshable braille display. I use it mainly on the Linux console, but I've been thinking about rigging something up for use with Sway.


Wow, I underestimated your seriousness about adapting to alternate output methods. Did you have to buy that braille display at your own expense? As I'm sure you know, they're expensive. How recently did you start learning braille? Has it been a challenge for you to learn to read it with your fingers?


I bought a cheap used one off eBay; it wasn't too expensive - a few hundred bucks. I know that nicer ones can get up there, though. Learning braille was an unusual challenge, but I didn't find it especially difficult. It only took a couple of days to become reasonably proficient with it. The braille display I ended up with is pretty nice despite its low price: it has a good finger-feel and a nice set of basic features.

The most difficult thing for me would be learning advanced levels of braille, which involves memorizing shortened forms of many words, but I reckon I can get away with just using long-form for a good long while.


> I also use VIM because it feels like the best case of voice integration or braille integration

Check out Emacspeak: http://emacspeak.sourceforge.net/

It’s written by T.V. Raman, a blind engineer at Google by way of Cornell.

Among other things, with it you can just use the built-in Emacs browser!


Would you be willing to chat privately with a colleague of mine who is going through a similar loss-of-sight transition?


(BTW, stealth sounds awesome!)


Thank you :3 It’s still a ton of work. I heavily underestimated the networking and parsing parts... and had to invent a test runner that can exercise networking implementations against known buggy situations (e.g. simulating fragmented responses like those on slow 2G mobile connections).

I totally switched my workflow to Test-Driven Development after the last 6 months of parsing CSS and HTTP/1.1 responses.

It’s amazing how much server infrastructure violates W3C specs and recommendations. Something like Partial Content (206) is mind-blowingly crappily implemented on servers these days when it comes to keep-alive sockets and multiple range requests. Some servers reply only with chunked encodings, even for frames of fewer than 256 bytes (looking at you, Cloudflare DNS), some only send back a single stream... some just send back ranges without headers...

And I only support HTTP/1.1 as of now, because HTTP/2 and 3 are both kinda undebuggable and there’s no reference-class test suite to validate implementations against.


That sounds brutal!

Still, it seems like a very worthwhile project. I'm fed up with modern browsers myself (the "megabar" on Firefox 75 is, somehow, the last straw for me.)

I'm going to keep an eye on Stealth because it sounds like the perfect browser/proxy engine for an experimental UI I'm working on.

Cheers! Good work and good luck. :)


On the days that you turn your monitor off, how do you do email? Do you also have a speech synthesis plugin for your aerc email client? Or do you use it with a generic screen reader for terminals? If the latter, which screen reader?

I've found that, counter-intuitively, a fully accessible GUI program with a good GUI screen reader is easier to use than a screen-oriented terminal program with a screen reader. The trouble with the latter is that the user has to understand visual concepts like highlighting, the meaning of special characters, etc.

Of course, an application or plugin that's tailor-made for doing a particular task with speech output is better than either of those other choices -- as long as you don't have to use an application that's overall inferior (e.g. using the Emacs/W3 web browser with Emacspeak as opposed to a mainstream browser).


>On the days that you turn your monitor off, how do you do email?

Poorly. I want to improve aerc in this respect. For the time being, I use a mix of my braille reader (brltty) and piping emails into vipe so I can use my vim plugin to read them.


Have you ever used a conventional GUI screen reader? Something like NVDA for Windows, VoiceOver for Mac or iOS, Talkback for Android, or Orca for GNOME? Reading a web page or an HTML email with one of those might give you a different perspective on what's possible, and specifically, how much better the experience of reading a hypertext document with a screen reader can be, compared to something like BRLTTY.


I have used Orca, and I can't stand it. The main advantage is a global place to route text for speech synthesis, but I simply hate using screen readers with applications that were not designed with accessibility in mind. There are few better solutions for browsing the web, though. I've been meaning to try lynx with brltty.


> I simply hate using screenreaders to use applications which are not designed with accessibility in mind.

I can understand that. I think many of us have just accepted that it has to be this way, because we're a minority and we want to have all of the advantages of using mainstream applications (economies of scale, active development, not being at an extra disadvantage compared to sighted peers, etc.).

Of course, you don't fit the profile of a "mainstream consumer" when it comes to computers. In particular, I gather that you take full advantage of the hackability of free software. So using custom TTS plugins as opposed to a clunky generic screen reader is just an extension of that overall approach to using computers.


If Orca is the only GUI screen reader you have tried, I can understand you being put off. Try NVDA on Windows with Firefox and you'll never look back. NVDA is open source btw.


I would strongly recommend switching to Windows - the screen readers are just so much better than anything on Linux. I agree with the other person who said that GUI programs are easier to use with a screen reader. At least on windows, this is definitely the case.


You know that Drew is the maintainer of sway, right? I don't think switching to Windows is an option here :P


From my experience, using the web in lynx or with emacspeak isn't an option in 2020. You just have to have a modern browser in conjunction with a well maintained screen reader. I wish I could use linux for everything, but if I want to be productive on the web I have to use Windows.


> From my experience, using the web in lynx or with emacspeak isn't an option in 2020.

Even 20 years ago the limitations of those options were clear to anyone who was willing to face reality. I was in denial for a while. (Note: I have limited vision, but I spent a lot of time helping blind people use Linux back then.)

Of course, Lynx and Emacs/W3 aren't the only alternatives. I think an interesting option would be a specialized browser UI based on headless Chromium.

In any case, I'm guessing Drew won't give up his free-software ideals easily, if at all. And he's a capable enough hacker that I'm sure he'll come up with a solution that works well for him.


I would love to read more about this in your blog! Sway is my daily driver and I love it!


It seems like this level of organization would be beneficial to the sighted community as well. Is this a value that you use to market to customers?


Yes.


Which do you prefer, screen readers or braille readers? Also, how long did it take you to be able to feel the braille effectively?


I don't like screen readers, but I do like teaching applications to speak for themselves based on text commands. I like to combine this with a braille display and use both in different contexts. For example, a braille display is unambiguous about punctuation, capitalization, etc., but speech synthesis is necessary to catch my attention for a notification from a non-active application, and it is more comfortable for reading natural language. It's worth reiterating, though, that I don't need these tools (yet), so people who depend on them daily may have a different opinion.

Braille is easy, I could read it reasonably well with just a couple of days of study.


Would you be willing to chat privately with a colleague of mine who is going through a similar loss-of-sight transition?


As a side note, Weechat runs beautifully in Docker. I access it with Glowingbear, but accessing it through a terminal would also be trivial.


Rob Pike's Acme editor is another powerful tiling editor you can check out.


As a huge fan of Plan 9, I would strongly recommend against acme for visually impaired users. It's highly mouse-driven, requiring a spatial understanding of the interface and the ability to see where the mouse is and coordinate its movement to execute commands.


Good point - I didn't think that through enough before recommending. I just thought of it as a nice tiling editor.


Thoughts on Sam? It seems to me like it might be a much better fit, especially given the ed-like command-only mode. (And I always liked it better than acme anyway...)


I started my career at Bell Labs in the 80s, and one engineer there made a huge impact on me. I learned more from him than from any other single person I have worked with since. He suffered from progressive myopia, so he had a rig with a camera, a big screen, and a huge monitor. He coded slower than everyone else, but in a way, that was his secret power because his code almost never failed in testing. He was also brilliant, and I learned more about hardware and software from him than I did in my Masters EE program. Also, he had this insight about the balance between when to use hardware and when to use software that I think is somewhat lacking today (discussion for another day).

A lot of people have said that he was the inspiration for the character Whistler in the movie Sneakers. Could be - he was really well known - but he passed away some time ago. I recall a time when we had a big meeting with a lot of executives; he had long hair and a beard, and after the meeting one of the big wigs said to me, "Who is the guy that looks like Jesus? He is a genius."

Sorry that I am going on a bit. I cannot even begin to understand what you are going through, but I can tell you that you should not underestimate the impact you can have on others. I wish I had better words here, and I wish I knew more about what to suggest to help you technically, but all I know is that someone dealing with something similar to what you're facing meant a great deal to me personally and professionally. Best wishes.


The meat of your comment, to me, is this phrase: “he had this insight about the balance between when to use hardware and when to use software that I think is somewhat lacking today”. Would you attempt to clarify what you think he would have said about balancing the design of how software and hardware work together? Thanks.


Sorry I have not been monitoring this.

Here is what I learned back then. What we were working on was an automated system to monitor data circuits, which back then were basically modems connected to a dedicated audio circuit. They called it a private line data circuit.

The problem back then, of course, was that you essentially needed a way to sample the audio modem circuit and do analysis of it. We had written some code on a Symbolics computer that could do the analysis (that was the old-timey LISP-type AI computer).

The trick was how to get the samples without a person having to jack in and record it or whatever. Today this seems trivial, but it was more complicated back then. We brainstormed on it (he had a sharp ear, BTW). We ended up building a device with some custom hardware that interfaced an ADC over DMA to a 68000 processor, could monitor multiple circuits, etc., and fed the data to the Symbolics, which could predict all sorts of things.

Anyhow, it taught me how you could sort of combine the two to offload some stuff to HW but still keep the flexibility of SW.

I see a lot of similar trade-offs today. Now of course we have FPGAs for hardware that can do a lot, but you still need interface circuitry to the real world depending on the application. I have worked on a lot of projects where there is a tendency to always reach for software, or vice versa.

What I think happens is you might have a company, let's say company 'G', that is basically a software company, and everyone they hire knows software really well. And they interview and hire more software people. Then if they were to, let's say... buy a company that makes HW, perhaps phones, they have a difficult time. Or they try to build out a network and struggle.

Then maybe you have another company 'V', and they are good at building a network, but they don't understand software, so if they buy a software company it is difficult for them.

Then you have a company like Apple that has always done both, and they seem to make better choices.

I am sure a lot of you out there can think of similar examples.

This is just my observation based on my experiences. The more a team has a diverse set of experiences and backgrounds (HW, SW, network), the better the chance of getting an optimal solution.


I think the modern equivalent would be having a good sense of which workload belongs server-side vs client-side in web dev. Backend devs want to do everything on the backend and frontend folks want to do everything on the frontend. Then you get issues like an API returning too much data that someone can abuse, or hundreds of AJAX calls to set something up...


I'm interested in hearing more about that as well.


Me too.


> but in a way, that was his secret power because his code almost never failed in testing.

Reminds me of the stories about Stephen Hawking: how it seemed like he was just sitting there miserable, but instead he was thinking very deeply, and how his disabilities forced him to think visually in his mind and compensate, because he would only get a quick chance to absorb input.


I always assumed Whistler in Sneakers was a reference to https://en.wikipedia.org/wiki/Joybubbles


I did as well. I can't find a reference at the moment, but I'm almost positive the authors of the screenplay said as much in the early 90s.


I am sure you are right, everyone on HN is always right about everything :) Probably those of us that loved Louie just felt like he was like that.


That 80s guy sounds like a genius, but this guy is probably not, like most of us. He may be intelligent and hardworking, but a genius is on a different level. Another thing is that progressive myopia is something different from going completely blind and deaf, which is this guy's fate. How do you communicate then?


> coded slower [...] his code almost never failed in testing.

Huh. I've had this most of my career as well, especially in the latter years. I do consider this a secret power, but many of my colleagues (especially the younger ones) view it as a fault.


Me too. I have always been slower, and it's not a fault at all. If you code slowly and make fewer mistakes, you may end up being faster in the end than someone who cranks it out but has to redo it 3 times when problems are found in testing. Of course there are, I am sure, super programmers who can do both, but I haven't seen one yet.


I have been blind since birth. I recommend downloading NVDA, a free screen reader for Windows, and getting used to using it for basic computer use. Getting used to hearing the speech as fast as possible is key to being able to use it efficiently.

You said you have hearing loss. Is it bad enough to make speech output useless? If so, you would need to learn braille.

I would never try front end design as I have no idea what it should look like, but you may still be able to do it if you have an image in your head of what you are trying to achieve. You would just have to ask someone to check it.

Python is not a problem with screen readers, contrary to what someone else said. The screen reader can be set to report the indentation level. In fact, I can't think of any text-based language that wouldn't be usable with a screen reader. Tools are a different story. Some work and some don't.

Feel free to contact me if you would like any more information, whether it's about computers or not. rob at mur.org.uk


Not only is Python not problematic when using a screen reader, but there are blind programmers who gladly choose Python over other languages. The NVDA screen reader, mentioned in the parent comment, is developed primarily by blind programmers, in Python. The Orca screen reader for GNOME was written by a blind friend of mine, in Python. I have another blind friend who wrote and sold several accessible apps for Windows and Mac using Python. And those are just a few anecdotes that I'm aware of.


I've come to understand that this is actually a very powerful reason to use tabs instead of spaces for indentation. I used to have an irrational preference for spaces, but once I learned that screen readers work better with tabs, it was a no-brainer from then on.


I don't mind whether it is tabs or spaces, as long as they are not mixed.


Are you telling me a programmer used their blindness to trick me into thinking their personal preference was absolutely necessary!? Man this war is ruthless!


I don't know, maybe it made a difference to them, but NVDA just says, for example, "4 space" or "1 tab".


You might also want to check out "dragonfly", which is a Python based system for integrating applications with speech recognition and synthesis, and includes a large library of pre-existing modules for many popular applications:

https://pypi.org/project/dragonfly/

Project description:

Dragonfly offers a powerful Python interface to speech recognition and a high-level language object model to easily create and use voice commands. Dragonfly supports the following speech recognition engines:

Dragon NaturallySpeaking (DNS), a product of Nuance

Windows Speech Recognition (WSR), included with Microsoft Windows Vista and freely available for Windows XP


Fortunately my hearing loss is stable and correctable using hearing aids.


This was my biggest concern. I'm so relieved to hear this and have very high hopes for you!


This is a bit off-topic, but may I ask: how did you get introduced to programming?

Feel free to not answer :D


Basic on a C64 in the 80s. I had just enough sight to read the screen if I set it to all caps and adjusted the contrast on the TV.


Biggest advice: start programming with your monitor covered by a sheet or turned off now, while you still have the option to turn it back on to figure out what you just did.

Gradually keep it off for longer periods without turning it on to see what's happening, until you can work without seeing the screen at all.


On Macs, there's a screen curtain feature that can be toggled on and off. The shortcut is Fn+Ctrl+Option+Shift+-. It might be different on older versions of OS X; it used to use the right Option key, but I've forgotten the keystrokes.


It looks like the shortcut depends on what one's VoiceOver trigger keys are.

Here's the Apple doc for anyone interested: https://support.apple.com/en-us/HT201443


NVDA on Windows also offers a screen curtain. So does TalkBack on Android.


I don't have any answer for Zach (the poster), but I feel a big need to share my feelings about this.

I'm sipping a decent morning coffee, I'm in Japan, whereas my home is San Francisco. I've been here for ~5 weeks, trying to escape the coronavirus disaster that is unfolding in the US, and trying to enjoy spring in Japan as well. I'm lucky enough to be able to afford a few weeks abroad without too many money worries.

Work is a disaster. The last ~4 years of my life have been both unlucky (e.g. recently I was offered a highly lucrative executive job in SF, only to see the CEO change his mind on the whole operation - not on me specifically - at the last moment) and badly handled by me.

My professional career essentially came to a halt and so far didn't recover. I still keep my cool, but I am a bit worried about what's going to happen in the future, especially given the current situation with the virus.

And now, in all of this, a few minutes ago I read "I'm going blind, how to prepare", and my perspective suddenly changed. It's as if something clicked, and I can now "see" the world as it is.

I'm incredibly lucky. Most of us here are incredibly lucky. Zach, you probably didn't mean it, but today you somehow triggered a very positive reaction in me. I wanted to let you know.

I also wish you best of luck with your condition, and hope that you will manage to have a great life despite your deteriorating health.


This experience has been so humbling. I grew up in a rough household, and when I had barely scraped past high school graduation, gotten work as a SWE, and eventually made it to some Fortune 50 companies, I thought I had life figured out. Then I started bumping into things and struggling to play my favorite games. My eye doctor casually telling me I have RP, and my discovering in the parking lot after the appointment that I'd be going blind, really knocked me on my ass.

But there are silver linings. I was born with severe hearing loss but it's stable and corrected with hearing aids. I don't have any balance issues (I have USH2A, which only results in hearing/vision loss). I'm so fortunate to have experienced all that I have and all that I will. Thank you for the kind words.


> discovering I'd be going blind in the parking lot post appointment

That is devastating.

Did you get a second opinion?


Yes, this was 2 years ago. I ended up at Columbia Ophthalmology in NYC, and they were able to get me DNA tested after another local doctor failed to assist me in doing so. USH2A is the name of the specific condition, if you're curious.


I have two mutations on USH2A. It would be spectacular if someone could repair the whole gene.


Really appreciate both of your stories. Any chance you could share the specific mutations? It is such a difficult condition to understand, given that both of you have one or more mutations in USH2A but have different stories around hearing loss.


Photobiomodulation seems promising.

Photobiomodulation in Inherited Retinal Degeneration (2012)

"A growing body of evidence indicates that exposure of tissue to low energy photon irradiation in the far-red to near-infrared (NIR) range of the spectrum, (photobiomodulation or PBM) acts on mitochondria-mediated signaling pathways to attenuate oxidative stress and prevent cell death. "

https://dc.uwm.edu/cgi/viewcontent.cgi?article=1007&context=...

I have no experience with it, just lots of reading during this downtime.


>I'm sipping a decent morning coffee, I'm in Japan, whereas my home is San Francisco. I've been here for ~5 weeks, trying to escape the coronavirus disaster that is unfolding in the US, and trying to enjoy spring in Japan as well. I'm lucky enough to be able to afford a few weeks abroad without too many money worries.

I would say you're selfish or dumb enough to be traveling at a time when people should just stop those behaviors and stay wherever the fuck they are... And this is not even considering, you know, GHG and stuff.

But overall this comment is so full of oneself that I'm amazed.


I had a similar experience with my career, and after a lot of trying and moping, I recognized it as an opportunity to work on what I love, without compromise. Is there anything that you've been dreaming of working on?


Yes, exactly this. I am working on this. Nothing big to announce yet, but slowly getting there.

What is it that you love doing? Mind sharing a bit?


I would recommend studying up on Section 508 compliance: https://www.hhs.gov/web/section-508/index.html. It's a set of rules that all government orgs in the US must follow to make sure their content is accessible. If you do end up becoming visually impaired, you'll have a unique perspective on building accessible websites.


Are there consulting firms that specialize in 508 audits? Seems like a potentially lucrative field given the activist lawsuits that happen in this area.

https://www.natlawreview.com/article/ada-website-litigation-...


Yes, there have been professionals offering all types of services for 508 ever since it came out. I might be wrong, but none of the companies I have seen offering these services seems notably successful as a result of them. Compliance-based services are rarely a money maker, though in some situations, given predictable needs and clients, they are a way to build relationships that lead to more lucrative deals.


The W3C's Web Content Accessibility Guidelines (WCAG) are the basis for many accessibility laws and more useful for actually learning what makes a site accessible.

If you don't already place an emphasis on using semantic HTML markup, learn more about that, it's the foundation of accessible design.


Sighted developers should use a screen reader on their own sites to get an idea of what's up. Very simple changes with semantic HTML can make a world of difference. It's sometimes as simple as swapping tags for their semantic counterparts.


Most US universities need to comply too.


Yeah, my first programming job was translating Word docs to accessible HTML for universities. The jobs around Section 508 compliance seem mainly government-related.


Legally blind, I'm a full stack engineer (and a solution/security architect) with extensive experience in building both backend and frontend (web) systems. I work on Windows with the JAWS screen reader. I use AutoHotkey extensively to supercharge my productivity. I worked as an employee with Microsoft (and others) in the past and have been running my consultancy-cum-product company since 2016. Email in profile.

- You can do frontend coding, but certainly some assistance is needed for verifying the UI design. In any decent-sized project, I personally prefer a sighted colleague to handle look and feel (mainly the CSS part, though I know CSS), as I feel it's not a productive use of my time. It's always better to have a UI specialist anyway. FE devs have a lot of other things to do, especially when the app is SPA-based.

- Visual Studio is good for development and debugging (for .NET-related languages at least). If you're on Windows, use AutoHotkey to set up shortcut keys and hotstrings to automate repetitive actions and text. For instance, I prefer bash for using Git and have set up commands like 'gtcom', which expands to 'git add . (newline) git commit -am ''; I then just have to type the comment. Since you'd be working exclusively via keyboard, it's important to do more with fewer keystrokes to reduce strain on your wrists.

- Another important thing is to be able to find alternatives to the UI tools your colleagues are using, which can be highly inaccessible. Your programming skills and knowledge of system internals will help you with that. Do not settle for any tool that considerably decreases your productivity just because the team is using it, as you'll be judged on your deliverables, not on what tools you used.

I second what @kolanos has written. Programming is mostly a mental job (no pun intended), and everybody has to load a representation of the program into their head before they can start fleshing out good code. PG has also written about this.


> Programming is mostly a mental job (no pun intended)

I'm drawing a blank. What's the pun here?


The term "mental" is often used in the region where I live to indicate a psychologically disturbed person. So I was clarifying that I didn't mean anything other than the literal meaning: a mind-related job.


I think the pun has to do with the word 'job', which is an abstraction used for CPU scheduling, among other things.


'mental' can often also mean 'crazy'.


Mental: relating to disorders of the mind.


Would you be willing to chat privately with a colleague of mine who is going through a similar loss-of-sight transition?


Sure, email in profile.

