
Good vs. Better at Bad

There’s a particular point I’ve been trying to articulate about HomePod vs Amazon Echo and all the others that I haven’t quite figured out how to express succinctly in a tweet. So I’m going to resort to charts.

If we wanted to compare HomePod and Echo as “Smart” devices, digital assistants—whatever term you want to use—I think most people will agree that Echo has an advantage. How much of an advantage is up for debate, but let’s be extremely generous to the Amazon fans and say that Echo is twice as good as HomePod in this area.[1] That sounds like a big deal, right? Echo is soooo much better at being a voice assistant. Twice as good! Apple should be quaking in its boots.

[Chart: HomePod and Echo compared as smart assistants]

But better is a relative term. You can be better and still not be good, right?

Let’s switch gears for a moment and compare the two devices as speakers. Here we get a different chart. Personal differences in taste aside, only a complete lunatic would say that HomePod isn’t significantly better than Echo at being a speaker. But again, how much better is up for debate. I don’t think it’s totally unreasonable to say that HomePod is twice as good as Echo at sound, though.[2]

So we’ll add that to our chart.

[Chart: HomePod and Echo compared both as smart assistants and as speakers]

But we have the same problem again. We know one is better than the other, but we don’t have a sense of where “good” falls.

Without knowing where “good” is, anyone can wave either one of these comparisons away and chalk it up to priorities. Some people care more about the sound quality. Some people care more about the smart stuff. Sounds like a toss-up, right?

But there’s a threshold of quality where people consider something “good.” Where the general public—not just a niche of enthusiasts—agrees that a technology has gotten good enough to be ready for prime time.[3]

We reached the “good” threshold for speakers decades ago. The subcategory of affordable bookshelf speakers got there sometime in the past few years.

But we’re nowhere near “good” yet when it comes to digital assistants.

I say this with no small amount of respect for how hard this technology is and how far it has come recently. I’m as excited as the next geek when it comes to the future of AI and voice recognition. I think it’s all super cool.[4]

But it’s not good. Not for most people. It’s barely past the point of being a parlor trick, if we’re being honest. Answering trivia questions? Turning on the lights? There’s a reason even early adopters generally resort to using these devices for a small set of simple tasks. That’s about all they can do reliably.

I firmly believe we’ll get much better voice assistants eventually, but the fact of the matter is “good” is a long way off.

This is the reality of our chart.

[Chart: HomePod and Echo plotted against the “good” threshold for speakers and for assistants]

There are a few things that have to happen before voice assistants are going to stop being the butt of an SNL skit. First and foremost, they need to learn when we’re actually talking to them.

Listen to any tech podcast hosted by a voice assistant fan (there are lots of them) and wait for hilarity to ensue as they say the word “Alexa” or “Siri” in conversation (or any word that sounds remotely like those trigger words), and their Echo or iPhone proceeds to respond as if a request were being made. This is followed by five seconds of “Alexa STOP!!!” and laughter from the co-host.[5] Never mind the complaints that come later from listeners as all of their Echos and iPhones go off.

This happens so often that many hosts have resorted to substitute expressions. (Hey, Dingus!) It’s a running joke, even amongst the most enthusiastic of early adopters.

Have you ever noticed this never happens to Chief O’Brien on Star Trek? He can say “Computer, how long before the Dominion ship is within weapons range?” And he gets the appropriate response. But then he says “Captain, I’m going to need to tap into their central computer.” And the computer does nothing.

That’s because science fiction authors, unlike Alexa fans, understand that sensible people would not depend on voice control until this basic requirement was met.

Siri, Alexa—whatever name Google Home responds to[6]—need to be that smart. Non-geek humans are going to laugh you out of the room in the meantime. The device can’t be simply listening for “magic” words. It needs to know when it is being spoken to and more importantly, when it is not. Human beings are very good at this, and we expect the same level of skill from anything we talk to.[7] This is not an easy thing to get out of a computer, clearly. Because it hasn’t happened yet, and people have been working on it for decades. But until we resolve this, digital assistants are annoying more often than they are useful.

And that brings me to the next key word in this discussion—usefulness.

Here’s what I really want out of a virtual assistant: Assistance. Not trivia questions. Not timers. Utility. It needs to actually make my life significantly easier.[8]

Let me give you an example. And there’s no doubt in my mind this will be possible someday.

“Alexa, book me a flight for Peers Conf.”

If I had a human personal assistant, that’s all I’d need to say to get this task done. They would go straight to work, and I’d get on with my day. But in order for Alexa to do this, all of the following would need to be in place:

  • Alexa would need to be able to search the web and figure out that Peers Conf is a conference happening in April in Austin, TX. Not just to report that back to me, but to understand that this is the reason for my trip.
  • She would need to figure out the dates for the conference, then take into account my usual preference to arrive a day early, and if the conference ends near a weekend, to stay through until Sunday evening.
  • She would need to know my preferred flight times, the airlines where I have frequent flier accounts, that I fly nonstop whenever possible, that I’m starting in New York, but I hate Newark airport, and that I prefer an aisle seat.
  • If she had any conflicts between any of my preferences, she’d have to follow up: “There’s no available flight after 6:30am on Sunday morning. Do you want to extend the trip to Monday or take that early flight?” (Also, if I’ve moved on and started watching TV or listening to music, or I’m just talking to another person in the room, she would need to be courteous and not interrupt me. Perhaps she would send me a quick text or push notification and wait for my response.)
  • After settling all of this, she’d have to compile a summary and send it to me in an email or push notification to my phone so I could confirm. Once confirmed, she’d have to be able to book everything automatically with the correct credit card and send me the receipts via email and place the appropriate pass into my electronic wallet.

That is a digital assistant. And any device that could accomplish this reliably would be as popular as smartphones are today.

Wake me up when Alexa can do anything remotely this complex, and I’ll start to worry about Apple “falling behind” in this space.

Because here’s the thing: that level of complexity is not just a matter of gathering more data and training our AI models a little longer. It’s not a matter of third party apps. It’s not a matter of open vs. closed. It’s not a linear progression from where we are today to that. It’s going to take some major breakthroughs, deep connections into my life—financial, personal, and historical—that require user trust. (Amazon is never going to put this together by looking at my paper towel order history.) Not to mention the agreements between companies in several different industries required to have a digital assistant make purchases on my behalf. Heck, Amazon doesn’t really have a strong motivation to make something like this happen, because it would stand to gain nothing from the transaction.[9]

So yes, other platforms may currently be “better” than Siri. But when none of the platforms is good, what difference does that make, except to a small niche of enthusiasts? By all means, enjoy the Echo if you want to live on the bleeding edge of voice assistants. But don’t try to convince me Apple is doomed in this space, or that I’m missing out big because I prefer to listen to a good speaker and set my timers with Siri.[10]

That’s why even though it’s too early in the race to tell who will come out ahead—and there’s no reason one winner needs to take all in this field, by the way—I wouldn’t count Apple out in the long term for digital assistants. At least Apple knows the difference between a tech demo and an actual product. More critically, it knows to prioritize features where it can actually deliver something good, rather than something better at bad.


  1. In my limited experience with Alexa, mostly watching others struggle to get her to understand anything, I’d put it more like 10% better. But I don’t need Siri to be even close to make my point here, so I’ll concede this much. ↩︎

  2. You can fart music that sounds better than Echo, as far as I’m concerned. So I’d put this more at HomePod being 10 times better than Echo, easy. But again, I don’t need to prove that to make my point, so I’ll be generous to the opposing side once again. ↩︎

  3. There was a time when only enthusiasts thought personal computers were worth a damn. The general public thought they were expensive and not useful for much. And you know what? The general public was right. Enthusiasts eventually made computers that were good enough for the rest of humanity, but that took a while. ↩︎

  4. I’m old enough to remember “My voice is my password” on my Mac running System 7. Ask your grandpa about those good old days of voice recognition technology. ↩︎

  5. ProTip for podcasters: Turn off your Echo or HomePod before recording. See also, Do Not Disturb mode on your iPhone and Mac. Pretty basic stuff. ↩︎

  6. It’s telling I don’t even know the answer to this. Google, which by all rights should be miles ahead of both Apple and Amazon on this front, has done such a poor job of marketing its assistant that even a geek like me doesn’t know what to call it. ↩︎

  7. Heck, even my cat is pretty good at knowing the difference. ↩︎

  8. I’m aware that for some, just having the option to use voice, unreliable as it may be, does make their lives significantly easier. I’m talking about reaching a critical mass where the majority of people on earth get real utility from voice-activated devices. ↩︎

  9. The Echo is a loss leader designed to get you to buy more stuff on Amazon. Jeff Bezos doesn’t want to help you buy airline tickets. HomePod, on the other hand, is a high-margin piece of hardware that makes money directly, at least. ↩︎

  10. Multiple timers, Apple. Please. No excuse for that one. ↩︎

Using iPad for Long-Form Writing

I fought the notion of a mechanical keyboard for my iPad for years. Part of the reason is that every keyboard designed for a tablet I’ve tried (including Apple’s own Smart Keyboard) is just not good. Small keys. Crappy feel. I’ve never been able to type a sentence on any of them without immediately concluding that they were terrible compared to the on-screen keyboard, let alone my MacBook Pro keyboard.

But the bigger reason I’ve always been opposed to external iPad keyboards is I just fundamentally believe a tablet is a superior form factor to a laptop—for the subset of tasks I do most often on my iPad.[1] And combining touching a screen with typing on a keyboard is, as Phil Schiller has suggested, ergonomically ill-advised.

So why, then, am I typing this with a mechanical keyboard on my iPad? Well, because I discovered that for prolonged periods of typing, where I want to do nothing else but type thousands of words for a blog post, a combination of Apple’s Magic Keyboard and iOS can actually be a better choice than my MacBook Pro.

Don’t get me wrong; I still believe strongly that for most uses, an iPad is a much better device when held in my hands than it will ever be when I put it into my Canopy and connect the Magic Keyboard. But that doesn’t mean a mechanical keyboard doesn’t come in handy for the very specific use of long-form writing.

Why use an iPad when you could just use your Mac?

For years, I asked myself this exact question, and I answered it simply by using my Mac. After all, typing long-form on a Mac is a great experience. But there are a few things that give writing on iPad a slight advantage.

First, there’s something to be said for a truly “distraction-free” experience. I use Ulysses in full-screen mode on my MacBook Pro, but even then, it’s way too easy to switch over to Twitterrific, Slack, or any number of other apps. I know it’s almost as easy to do the same on iPad, but for whatever reason, I don’t. I mostly stay focused on my writing, with only the occasional diversion when I need a break.

A Mac can be set up to run apps full-screen, but iPad does that by default.

My very aggressive approach to turning off notifications for just about everything on my iPad probably has an effect here as well. But I don’t really have that option on my Mac, because I often need those notifications while I’m doing my day-to-day work.

I suppose I could easily designate a second Mac to be a dedicated writing machine and get mostly the same effect. But a Mac you use only for writing is a bit like a shotgun you use only to kill flies. It’s way more machine than you need for the job at hand.

Besides, I get a lot more uses out of my iPad than just writing. Most of these involve performance on stage, or other tasks that are similarly in need of a distraction-free setup. Using the same device for writing makes perfect sense.

The second big advantage is battery life. I never give a second thought to battery life when I’m writing with my iPad. The same can’t be said of any laptop. And the more I use my iPad to write instead of my Mac, the more battery life my Mac will have for Photoshop, Logic, Xcode, and all the other things I can’t currently do on iPad.

Third, apps like Ulysses are just as good on iPad as they are on macOS. There’s no sense of using a “watered-down” version at all. Ulysses for iPad is as feature-rich as its macOS counterpart. And just as easy to use.

Finally, portability. When I want to head out to a coffee shop and just do some writing and nothing else, my iPad is always going to be lighter than my MacBook Pro, even with the extra keyboard added, as light as that laptop is. It’s also the difference between lugging around a larger bag like my AirPorter and just throwing my Muzetto over my shoulder.

Also, a number of cafes here in New York have a strict “No laptops” policy on weekends or at certain hours of the day. They have no such restrictions on iPads, however, as silly as that sounds.[2]

Why the Canopy/Magic Keyboard?

I chose the Canopy from Studio Neat, combined with Apple’s Magic Keyboard, which is the same keyboard Apple includes with the iMac. There were a number of advantages in this setup for me compared to Apple’s Smart Keyboard, or any other iPad-specific solution I researched.

  • The Magic Keyboard just plain feels better. It’s not nearly as nice as my new MacBook Pro keyboard, which I love; the Magic Keyboard’s keys have a spongy and inaccurate feel in comparison. But the keys and layout are full-sized, which is a huge advantage over the Smart Keyboard (at least the one for the 10.5-inch iPad, which is what I have). And they don’t feel like whatever it is those Smart Keyboard keys feel like.[3]
  • The Canopy folds into a nice compact package that fits into my bag with no issues, and yet is easy to leave behind when I don’t need it. I have a Smart Cover for my iPad, which I use occasionally to prop up the iPad to watch videos and such when I’m not typing, and to add an extra level of protection to my screen. I can keep the Smart Cover connected to the back of my iPad while using it in the Canopy.[4] And I still get screen protection when I don’t want to bring the keyboard along with me. The Smart Keyboard, because it is a combined screen cover/keyboard, leaves you with the extra weight of a keyboard at all times. Unless you want to swap between a Smart Keyboard and Smart Cover.
  • The Canopy is easier to open and close, at least for me, than the Smart Keyboard. And when closed it provides a nice layer of protection to my Magic Keyboard.
  • Because the Magic Keyboard is a regular Bluetooth keyboard, I have the option of using it with other devices, such as my iPhone, or even my MacBook Pro, if I want to.
  • As new iPads get released, assuming they continue to have Bluetooth, I can keep using the same keyboard. I could go back to the 12.9-inch for my next iPad, for instance, and not have to replace my keyboard. The Canopy accommodates any sized iPad, since it’s designed around the Magic Keyboard, not the iPad.
  • Being able to lift my iPad out of the Canopy and place it back down lets me switch between typing and touch-based interactions much more easily. If I want to check Twitter, or navigate around in the Music app, etc., I just pick up the iPad, use it as I normally would in my hands, then put it back down when I’m ready to start typing again.
  • A Canopy and a Magic Keyboard together cost less than the Smart Keyboard.

There are, of course, some downsides to this setup versus Apple’s Smart Keyboard.

  • Battery—the Smart Keyboard uses the Smart Connector to get its power directly from the iPad, so you don’t have to worry about charging a separate battery for the keyboard itself. In practice, this has turned out to be mostly a non-issue, though, as I find the Magic Keyboard’s battery lasts for months, and I just about never have to worry about running out of charge. I set a reminder to recharge every five weeks or so (even though I don’t necessarily need to) just to be sure the keyboard always has some juice.
  • Bluetooth connection. Because it doesn’t connect to the Smart Connector, the Magic Keyboard needs to reconnect to the iPad. 90% of the time, I flip the on switch, tap the spacebar, and a few seconds later I’m connected. Sometimes it takes a little more effort to get it to connect. Not a big problem at all, really. But definitely not as nice as the Smart Keyboard’s instant connection.

Overall, I’m very happy with my choice to move most of my long-form typing to iPad. And I’m very pleased with the Canopy/Magic Keyboard combination. I resisted the notion of attaching a keyboard to an iPad for too long. I stand by my original opinion that for many, many tasks, iPad is much better as a slab of glass with no mechanical keyboard. Thus I still have no interest whatsoever in an iPad with an integrated, always-connected hardware keyboard for my own uses. I also have no desire to see iOS and macOS merge completely into some sort of combined touch/pointer Frankenstein.

However, designating my iPad as my main blogging device by attaching a Magic Keyboard on occasion will help me fulfill one of my goals for 2018, which is to spend more time in iOS and get even more use out of my iPad.


  1. Anything you would do while standing, for starters. Or, as I like to put it, if you have a job that involves a clipboard, it would be perfect for iPad, and likely terrible on a laptop. ↩︎

  2. I’m always happy to exploit a loophole in bad policy whenever I can. ↩︎

  3. I can’t even describe to you what the keys of the Smart Keyboard feel like to me. It’s not anything even remotely resembling a keyboard. And the space between the keys makes the target area of each key seem smaller than it should be as well. It’s just about the worst-feeling keyboard I’ve ever experienced, to my fingers. I have never been able to complete a single sentence with it without multiple typos. ↩︎

  4. I actually recommend the Smart Cover in conjunction with the Canopy, since the Canopy protects the keyboard, not your iPad screen, when folded up for travel. Also, the extra magnetic area connecting the iPad to the Smart Cover acts to prop the iPad up ever so slightly when placed on the Canopy. Which makes it just a bit easier to tap buttons at the bottom edge of the iPad screen, or to swipe up for multitasking when necessary. ↩︎

On HomePod

Let’s do a quick thought experiment.

You’re Apple. You want to launch a smart speaker product, but you haven’t gotten one into the market yet. Years have passed, and some of the competitors (Amazon and Google) are making some headway, though their products are far from mainstream. Both of those products are backed by voice recognition systems that have become pretty refined, ones that surpass your own Siri in some respects, at least. And the people who do have these devices are pretty tied to the functionality they bring to the table.

What do you do?

Do you launch an also-ran box at a similar price point, with crap sound and inferior voice recognition? Knowing that you don’t have the data (thanks to your focus on user privacy) to be superior on services, or a global retail operation like Amazon’s that makes ordering replacement paper towels easy?

Or do you try and find another angle on which to compete?

I have no idea if Apple’s strategy of doubling down on speaker quality will succeed, but I know trying to beat Amazon or Google at the voice stuff alone will fail. You have to play to your strengths.

Whatever got us here, this is Apple’s only play. Enter the market riding on a reputation for music quality (thanks to the iPod, Apple Music, etc.) and bring Siri functionality along over the next couple of years as the user base grows. Given that the vast majority of people have never owned a smart speaker, I don’t think it’s a crazy proposition to sell a great-sounding speaker (a benefit everyone understands) from the company that brings you all of your music. Oh, and you can do some cool home automation stuff with it, too.

Who cares if the people who have Echos and Google Home devices want to keep them? There are far more people out there who currently have nothing in this category.

In order for Apple to win, Amazon and Google don’t have to lose, in other words.

Wearable Challenges

Well, it’s finally come to this.

[Photo: AirPods fitted with silicone ear hooks]

After more than a year of walking around New York with my AirPods, I finally gave in and bought these silicone hooks from EarBuddyz to keep the darn things from slipping out of optimal sound position, or worse, falling out of my ears altogether.[1]

I had remarked when I first got my AirPods that they were not exactly a snug fit. Over time, it seemed like they were actually getting looser, if that’s possible. My ear holes have gotten bigger over the past year, I suppose.[2] Factor in obstacles like scarves, hats, balaclavas, and other accoutrements of winter, and you can imagine a scenario where AirPod “incidents” were on a sharp rise over the past month.

Fortunately, these sleeves do their job, and now I can move around as much as I want without worry. The AirPods not only stay in my ears now; they also stay in the optimal position for sound, which is great. I can finally lie in bed on my side and my AirPods stay put, which is also great.

What’s not great? Having to take the sleeves off to put the AirPods back into the charging case. And then having to keep the sleeves somewhere safe while the AirPods are charging.

More importantly—the very notion that these sleeves are necessary is a big problem in my mind, because it speaks to Apple’s design team overlooking basic variance in human anatomy. The experience of wearing these, being constantly reminded that my ears are bigger than what Apple considers “average,” takes a bit of the sheen off an otherwise phenomenally executed product.

Lots of criticism has been lobbed Apple’s way in recent years regarding computers. I never pay much attention to these nitpicks. Apple knows how to make computers, and despite not being able to make absolutely everyone happy 100% of the time, the computer business is going to go just fine for Apple for many years to come. Tech is not Apple’s problem.

But if wearable technology becomes a huge part of Apple’s future—and I believe it will—Apple has some design challenges ahead of it. First and foremost, Apple needs to acknowledge that human bodies come in all shapes and sizes. So making one size of any wearable device is never going to fly.

Apple did a better job with Apple Watch, at least, offering two different body sizes and various watch straps that adjust to many different wrists. There are still some bands that I can’t quite get to fit perfectly, and I’m sure even more sizes would be welcome for some. But at least there’s an acknowledgement that not everyone has an equal-sized wrist.[3]

I’ve been hearing rumors that Apple’s next big wearable will be eyeglasses. Think Google Glass, but done by Apple, which could be a compelling product.

But then I think about the last time I went shopping for new glasses. I literally tried on fifty to sixty different frames before I found one I didn’t hate. Even with companies like Warby Parker allowing you to virtually “try on” several pairs, I have yet to find a pair in their collection that looks good on me.

I wonder just how it will be possible for Apple to mass-produce glasses that are the right fit for the entire population. And how will they keep enough stock of all the different variations in a retail store?

Never mind what the technology can do. If it looks terrible or doesn’t fit properly, people are not going to wear it. And there’s no such thing as one or even ten pairs of glasses that would cover everyone comfortably. So either Apple needs to partner with frame makers to incorporate its technology into existing frames (which is doubtful, given Apple’s tendency to want to control the entire widget), make a product that somehow attaches to any frame (also doubtful, as it would likely be clunky), or be prepared to design and manufacture hundreds of frame designs and build each Apple Glass product to order.

Considering unique prescriptions, it becomes clear that Apple Glasses would have to be made to order for most people, anyway. Is Apple prepared to become experts in creating prescription lenses? Are they prepared for the extra complexities of customer service and regulations surrounding that industry?

To me, the entire tech sector needs to seriously rethink its approach if technology is meant to become wearable. This is new territory. Industrial design is starting to merge with fashion design in a big way.[4] Manufacturing bespoke wearables for the individual instead of one product that fits all is not something Apple or any other computer company has had to do before on a mass scale.

I hope Apple is hiring accordingly. This is a real opportunity to not just innovate in the tech space, but to bring Apple’s skills in mass production to other realms.

Maybe the first generation of AirPods was just a test balloon to see how sales went? If that’s the case, I think by now Apple can justify an expansion into new sizes. I’ll be watching with curiosity this year to see what develops. Getting a few different sizes of AirPods would be a good sign that Apple is getting more serious about wearables in general. And it would mean I could go back to not needing any hacks to enjoy listening to music again.


  1. I actually tried two other silicone sleeves first: the ones that just go over the buds and don’t provide the extra “hook” for the lobe, and the super-thin ones that fit inside the charging case. Neither kept the AirPods in position for me, so they ended up being failed experiments. ↩︎

  2. It is true that while our heads eventually stop growing, our ears continually get bigger as we age. I don’t know whether that applies to the ear canals, however, or just the outer ear. ↩︎

  3. Even my own wrist continuously expands and contracts throughout the course of an average day. I find myself adjusting loop bracelets two or three times daily, and my Link bracelet will often go from snug in the morning to loose on my wrist by evening. ↩︎

  4. And the fashion world hasn’t even solved these sizing issues to anyone’s satisfaction. Don’t get me started on how “rack” clothes are cut vs. what actually fits most people. ↩︎

Clear and Verifiable

The way in which Aaron Sorkin describes Intention and Obstacle in the early bits of his MasterClass is so simple, so easy to comprehend, and most importantly, so easy to test.[1] In just a few minutes, he offers a simple way to detect whether there is an appropriate level of conflict in any story. You can apply this lesson to any screenplay or script that you read, including your own.

Sadly, that sort of simple clarity is a rare thing these days. I recall when I was getting into coffee, I would hear over and over again about the famous 30-second “bloom.” You wet the ground beans and wait about 30 seconds before pouring in more water. Every expert I read recommended it. The instructions for my Chemex recommended it. I even included it in my long blog post about coffee methodology last year, though I had to confess I had no idea why I was doing it or whether it made any difference.

And then one day a barista at San Francisco’s Four Barrel Coffee explained it to me. He said carbon dioxide gets trapped in the beans during the roasting process and gets released when the beans first come in contact with water. You don’t want carbon dioxide trapped in your coffee, because it contributes a sour taste. So you wait about 30 seconds until most of the carbon dioxide is released into the air, rather than into your liquid.

Boom. Simple. Easy to verify online. And easy to test myself with my own coffee, since I now knew what to look for (a sour taste) when this step was skipped.

Who knows? Maybe you want your coffee a little sour. In that case, go ahead and skip the bloom, or shorten it a bit. Once you know why you are waiting, you can manipulate that timing to your liking, rather than blindly following the rule for its own sake.

I had never met a barista who didn’t do the 30-second bloom. But I also hadn’t met one who could tell me why it was important until that day.

So much “common practice” in various fields is accepted without question. I think experts are often intentionally ambiguous, as if they are protecting the secrets of a magic trick. And yet, it’s so important for newcomers to learn the why, not just the what.

No matter the field, when you are teaching people new things, try to use clear language that allows the learner to repeat and test concepts in their own work. And encourage them to actually experiment. Ideally, you want your students to surpass your own abilities, and they can’t do that if they haven’t taken those lessons to the lab, so to speak.

You are a technician. And techniques are hard-won over years of practice. You shouldn’t expect others to blindly accept your “rules” just because they work for you.

Your students need to know the why behind the rules before they can start breaking them. Otherwise, they end up breaking rules in a desperate attempt to be different. And that doesn’t elevate anyone’s work.


  1. While I don’t worship at the altar of all things Sorkin, as some do, I have tremendous respect for his talent for words. And now I have a newfound respect for his ability to articulate his process. ↩︎