Jamon Holmgren


On Tech Opinions

February 27, 2024

I continuously see strong opinions from Twitter devs, stated with authority, slamming the door shut on various technologies. I think it’s hard for newer developers in particular to see through this and know who to follow and which opinions to trust.

Here’s how to cut through the noise.

Nothing substitutes for shipping real world software to users.

NOTHING.

Not even YouTube videos. (Shameless plug: go follow my channel at quests.jamon.dev.)

Videos are useful for exploring, for learning, for developing nuance and depth in a subject. But they’re not enough for developing an actual useful opinion about a technology.

Here’s how I would recommend you develop an actual useful opinion about a technology.

(Hint: while you’re in the middle of this process, it’s okay to say “I’m still researching and learning about <technology>. My current framing is <opinion>, but I need to know more before I can say that more definitively.”)

Don’t assume your intuition is 100% accurate. It’s okay to go “ew” at first glance, but don’t mistake that for anything real. Go in knowing that you may have things to learn: this journey will either confirm your initial reaction or teach you something new, and both are positive results.

Our intuition is noisy and unreliable. When I first saw JSX my intuition was “ew”. Mixing HTML and JS? Can’t run it in the console? Needing to rename my JavaScript files “.jsx”?

Many of you had the same reaction. And we were wrong about JSX, because our intuition wasn’t reliable there. JSX is actually a very useful technology and has been used to build all kinds of complex software now.

Learn the context around the technology. Realize there’s almost always a reason something is done the way it is. Learn the history, what tradeoffs were considered, what bugged the creators of the technology about existing solutions.

Learn how it actually works, at least at a high level. I can’t tell you how many times I’ve heard people issue very strong opinions about LLMs, for example, without understanding the slightest thing about neural nets and transformers and self-attention.

It’s hard to take them seriously.

Go ship something real with it, to actual users. Try to actually use it in a way that the creators of the technology would agree is the right way to use it.

And learn the tradeoffs. Because it’s always tradeoffs. What’s good, what’s bad, what is promising but underdeveloped?

Build in feedback loops. The only real way to get good at something is through tight feedback loops and iteration. Having someone you can bounce ideas off of and provide feedback as you go is invaluable. It’s especially helpful if those people have diverse technical perspectives.

For example, when I was writing my article about Flutter, I asked Luke Pighetti and Theo to review it. Both gave me very valuable feedback and also became friends in the process, people I can trust to give good insight.

Once you’ve done this, you have a really good foundation to talk about the tradeoffs of that particular technology. It’ll also give you a lot of useful perspective for other technologies in the future.

BUT…

…technologies mature, they change, and they evolve.

So realize that if you’re not staying up to date after your initial research, your opinion can fall out of date.

I’ll give you an example.

I tried react-native-elements back in the day. It was kind of kludgy and had very few useful features. So I kind of dismissed it as not a particularly good option for a React Native UI kit, and I would tell clients that.

Then one day a potential client told me he’d like to use it. I said I hadn’t used it in a while and would need to see what the latest and greatest version was like.

It turns out that version 3 had been released with a lot of really great improvements and it was actually … good!

Some of you really want more examples, so I’ll give you an opinion that’s based on going through this process (above) over many years: Redux.

We used Redux exclusively in the beginning for several years. We switched to MobX-State-Tree (MST) about 5 years ago, but we still use Redux on projects and have continuously for 8 years. We’ve also used Redux Toolkit (RTK) on multiple projects.

Here’s my opinion about Redux:

However:

So, my opinion (and the majority of my team) is that MobX-State-Tree is nicer to work in for most projects than Redux.

Sidenote: if you want to watch a 15-minute intro to MobX-State-Tree, I made one here:

These are real world observations that led to these opinions. These are things that I think I can safely be pretty opinionated about. But even then, I am open to being wrong about it and open to things changing over time. And other developers may prefer the tradeoffs of Redux over MST.

There are way too many technologies out there to research them all. So, what if you don’t have time to do all of this? Can you still have an opinion about a technology?

How I approach this is by … well, just saying that I don’t have a lot of experience with it.

For example, if someone were to ask me about XState, I would say something like this:

“David and his team are brilliant engineers. I would tend to lean toward trusting their opinions on patterns and software. With that said, my previous forays into XState haven’t been successful. I think it was because I didn’t take the time to really learn the best ways to approach state machines and also was probably trying to use it in ways it wasn’t intended to be used. I also think it was a little too steep of a learning curve and that hurt adoption. I also think the problems it solves don’t tend to be ones that are immediately top of mind for programmers. With that said, it looks like they’ve made some really great improvements in recent versions to simplify it, and I’d be willing to try it again sometime.”

This gives the facts and some opinions without slamming the door shut on it.

Ultimately, everything in tech keeps evolving. Your best bet is to invest in your own knowledge and experience. And ship code!


Backing up Google Photos to Amazon Glacier

January 3, 2024

I have a LOT of photos in Google Photos.

My wife and I started taking a few digital photos (mixed with regular film photos) when we started dating in 2002ish. But we really didn't start taking a lot of photos in earnest until 2005, when our son Cedric was born.

One of the earliest digital photos of me, standing on a ridge in Lava Canyon in southwest Washington state in 2002.

Since that time, we've taken thousands and thousands of photos and videos, amounting to just under a terabyte of data. Initially they were all uploaded to Google Picasa Web, but then that was migrated to Google Photos.

After deliberating about this for quite some time, I finally decided to back up our entire archive. I chose Amazon S3's Glacier Deep Archive storage class because it's very cheap long-term storage.

Downloading the archive

I started by buying a 2 TB Crucial external drive that I could connect to my Mac's Thunderbolt/USB-C port. Having an external drive served two purposes: one, I don't blow up my Mac's hard drive when I download all these photos and videos, and two, I now have another backup -- this one local.

I then went to Google Takeout. (Make sure you're in the right Google account if you're signed into multiple!) In the "Select data to include" section, I clicked the "Deselect all" button first, then scrolled down to Google Photos and checked the box next to it. Then I scrolled ALL the way to the bottom and clicked "Next step".

In the "Choose file type, frequency & destination" section, I chose the "Send download link to email" option. It would be amazing if they had a way to choose an Amazon S3 bucket (or better yet, Glacier itself), but they only support Drive, Dropbox, OneDrive, and Box as of this date. I chose the "Export once" option, .zip, and for file size I chose 10 GB. (I experimented with 50 GB but that was tough to download and upload effectively.)

After that, I waited a few days for Google Takeout to send me a link.

Once I had a link, it brought me to a page where I could download the ZIP exports one by one ... about 85 of them. I clicked to download about two or three at a time, putting them on the new external drive I bought, and let them download. It made me log in nearly every time, which was annoying. Also, you only have about a week to download them, and with how many I needed to download, I cut it kinda close.
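If you want to sanity-check the downloads as they land, a few quick commands from the backup folder on the external drive will do it. This is just a sketch; adjust the path and filenames to match your own export (mine followed the takeout-*.zip pattern used later in this post):

    cd "/Volumes/Crucial X8/Backups/JamonAndChyra-GooglePhotos"   # your backup folder
    ls takeout-*.zip | wc -l      # how many exports have landed so far
    du -sh .                      # total size downloaded so far
    unzip -t takeout-*-001.zip    # spot-check that an archive isn't truncated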

While you are downloading, you can prepare for uploading with the following instructions.

Uploading to AWS Glacier

I already have an Amazon AWS account, but if you don't, sign up for one. I won't walk you through that. If you're not able to sign up then this is probably too technical for you.

Here are the steps I took to create the credentials and Glacier bucket:

  1. Log into the AWS Console as a "root user"
  2. Go to the IAM security credentials section (you can choose a region in the top right, but I just left it as "Global" for this section)
  3. Create an access key and secret there and copy them somewhere safe.
  4. Install AWS's CLI (these instructions are for macOS): brew install awscli
  5. Log in using the access key and secret: aws configure
  6. Change directories into wherever you downloaded your backups. For me, it was in an external volume: cd "/Volumes/Crucial X8/Backups/JamonAndChyra-GooglePhotos"
  7. Create an S3 bucket in the region of your choice (the Glacier Deep Archive storage class gets applied per file when you upload): aws s3 mb s3://bucketnamehere --region us-west-2
  8. When your zip files are done downloading, you can upload them either all at once like this:
     aws s3 cp . s3://bucketnamehere/ --recursive --exclude "*" --include "takeout-*.zip" --storage-class DEEP_ARCHIVE
     ...or one at a time like this:
     aws s3 cp . s3://bucketnamehere/ --recursive --exclude "*" --include "takeout-*-001.zip" --storage-class DEEP_ARCHIVE
     ...or in blocks of 10 like this:
     aws s3 cp . s3://bucketnamehere/ --recursive --exclude "*" --include "takeout-*-00?.zip" --storage-class DEEP_ARCHIVE
     aws s3 cp . s3://bucketnamehere/ --recursive --exclude "*" --include "takeout-*-01?.zip" --storage-class DEEP_ARCHIVE
     aws s3 cp . s3://bucketnamehere/ --recursive --exclude "*" --include "takeout-*-02?.zip" --storage-class DEEP_ARCHIVE

This part is the most painstaking.
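Once the uploads finish, it's worth double-checking that everything actually made it up and landed in the right storage class. Something like this works as a sanity check (again, swap in your own bucket name):

    aws s3 ls s3://bucketnamehere/ --human-readable --summarize                                                # object count and total size
    aws s3api list-objects-v2 --bucket bucketnamehere --query "Contents[].[Key,StorageClass]" --output table   # confirm DEEP_ARCHIVE on each file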

Restoring the backup

I haven't had to restore from a backup yet. Note that objects in the DEEP_ARCHIVE storage class can't be downloaded directly with aws s3 cp; you first have to request a restore, wait for AWS to bring the object back (typically within 12 hours for standard retrieval, or up to 48 hours for bulk retrieval), and then download it to your local folder with a normal copy, like the sketch below.
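Here's a rough sketch of what that flow looks like. I haven't run this against my own backup, and the Days and Tier values are just examples:

    # 1. Ask AWS to restore the object out of Deep Archive (Bulk is the cheapest tier, up to ~48 hours)
    aws s3api restore-object --bucket bucketnamehere --key your-backup-file.zip --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'

    # 2. Check whether the restore has finished (look for the "Restore" field in the output)
    aws s3api head-object --bucket bucketnamehere --key your-backup-file.zip

    # 3. Once restored, download it like a normal S3 object
    aws s3 cp s3://bucketnamehere/your-backup-file.zip .

The restored copy only sticks around for the number of Days you asked for, so grab it promptly once it's available.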

Good luck!