Back to Contacts or: Correcting the Lasik Fade

January 06, 2017

So, this happened today:

After a decade-plus of fantastic, laser (errr, lasik) powered vision, my eyes started losing focus last year1. It's not too bad, but it's been bugging me. Probably more than it would most people, given my former life as a photographer.

I finally made it to the eye doc today and walked out with a new prescription for contacts. You know the "you never forget how to ride a bike" saying? Turns out, the same is true for putting contacts in.

Today wasn't nearly as profound as the first time I got glasses and discovered what leaves on trees look like. Still, it's nice to get that little extra bit of focus distance that was lacking2.


Footnotes

  1. This is not the nearsightedness that's almost certainly on the way. My distance vision is what diminished.

  2. Worth pointing out that I have no complaints about my history with Lasik. Paying for the surgery was some of the best money I've ever spent. Before going in, I couldn't see the giant "E" that's on top of the typical eye chart. Afterward, my eyes were a little better than 20/20. Something I didn't even know was possible.


Thoughts on Rogue One

December 26, 2016

I saw Rogue One opening night and thoroughly enjoyed it. Instead of trying to write a single, coherent narrative about it, I'm just going to throw out a bunch of bullet points. Otherwise, this would never get finished.


### WARNING: Spoilers for "Rogue One: A Star Wars Story" below. (Obviously) ###


  • It's pretty strong overall. The last act was masterful.

  • The only thing I wish I'd known going in was that some humans would be CGI1. The initial appearance of each briefly pulled me out of the film. Being prepared for the Uncanny Valley2 would have let me get through it faster.

  • The in-joke references to the original films were well done and not too heavy-handed (e.g. while the Blue Milk was there, the camera didn't dwell on it).

  • The "I've got a bad feeling…" line getting cut short was a touch of genius and perfectly timed.

  • I got pulled out of the film when the Rebels set off remote explosives killing Stormtroopers. Instead of The (evil) Empire, I just saw them as soldiers in uniform. Perspective flipped, and all I could think about was uniformed U.S. soldiers getting caught in the explosions. (There's more worth thinking about here.)

  • I kept thinking, "Why the Hell didn't they iron Orson Krennic's cape?" Maybe it's just really hard to have a white cape like that that looks good on film. I can't imagine the look wasn't intentional. It just didn't read well.

  • I managed to avoid all trailers and just about all production stills prior to seeing the film. This paid off several times. For example, take this image from the teaser trailer:

    Since I never saw the trailers, I had no idea AT-ATs would get involved at the beach. When they first showed up, I had a wonderful "Oh, Shit!" moment that would have otherwise been lost. Similar moments occurred throughout. I got to delight in seeing scenes for the first time in their intended context.

  • Another thing about skipping the trailers: I wasn't misled as to why the film is called Rogue One. The trailers make it seem like Jyn and crew go off on a sanctioned Rebel mission. Had I gone in believing that, the dissonance when it became apparent they were going against orders would have taken me out of the film. I'm glad that didn't happen.

  • The photoshopping on the movie poster makes Jyn Erso look significantly younger. It almost feels like an image from an earlier film.

  • Orson "Mr. White Cape" Krennic felt a bit like an over-the-top villain from the 1970s. Everything in the film had the subtleties associated with modern films except him.

  • In small ways, it felt like the first two acts were edited by a committee. Several folks all making sure their pet idea made it into the film. While it held up, I'd love to see a more refined edit. (No need to touch the last act though.)

  • The quote "Many Bothans died to bring us this information" popped into my head as Rebels started getting gunned down on the beach. This led to another "Oh, Shit!" moment: realizing most of them were going to get wiped out3.

  • Lots of credit to whoever came up with the lower-power, "single reactor" Death Star blast, which lets it fire in Rogue One without conflicting with the full-power firing on Alderaan in A New Hope.

  • I'm so glad they didn't kiss at the end4.

  • It strikes me as weirdly morose to sell toys of characters who are introduced and die in the same film.

  • I wonder if Jyn will become a Disney Princess despite the fact that we saw her die.

  • I hope the success of Rogue One opens up more movies from the Star Wars universe. I can't imagine it won't. Disney is a business. Buying the rights to the Star Wars franchise is an investment. They'll do their best to make a good return on it and making new movies is the natural way to go about that5.


Footnotes

  1. Worth pointing out, I wouldn't have wanted to know who was CGI, just that some people would be.

  2. The Uncanny Valley is the unsettling effect that occurs when things are made to look as human as possible but fall just short.

  3. Yes, I realized later the quote was about the plans for the other Death Star and those weren't Bothans in Rogue One. That doesn't diminish the feeling I had in the moment and the compounded feeling when the shock wave from the Death Star hit.

  4. I'm guessing there were lots of meetings with big shots at Disney and tons of pressure to have them kiss. Good on whoever made the final call to keep that from happening.

  5. Doing the Thrawn trilogy would be awesome, but it sounds like licensing will complicate that matter.


Blue Angels in Black and White

December 17, 2016

The sky was overcast with a helping of haze during the 2016 Jacksonville Air Show. Color images look dull without a nice blue sky. Black-and-white (done right) doesn't have that problem1.

For example, here's the Blue Angels in Black and White2.












Footnotes

  1. I prefer working in black-and-white anyway. Something that working on these images reinforced yet again.

  2. These images are being called by a responsive images loader I'm working on. It's still a work in progress. If you see something weird, or no images at all, please let me know. If this tech gibberish doesn't make any sense to you, you can safely ignore it.


XML Schema Snippet Tester

November 27, 2016

This short Ruby script is an attempt to reduce the pain of working in XML Schema. The idea is to carve out individual snippets and hammer on them in isolation1. It also makes it easy to verify XML that should be flagged as invalid doesn't sneak past2.

There's no need to use it with simple schemas. It's when working on complicated bits (e.g. trying to build a crazy restriction scheme for an attribute) that it's most useful.

Here's the script:

#!/usr/bin/env ruby

require "minitest"
require "minitest/rg"
require "nokogiri"

class Validator

  attr_reader :errors, :xml, :xsd

  def load_schema path
    @xsd = Nokogiri::XML::Schema(File.read(path).strip)
  end

  def load_xml string
    @xml = Nokogiri::XML(string)
    @errors = xsd.validate(xml).to_a
  end

  def is_valid
    errors.each { |error| puts error } # Debugging output
    errors.empty? 
  end
  
end

class ValidatorTest < MiniTest::Test

  attr_reader :v

  def setup
    @v = Validator.new
    v.load_schema('schema.xsd')
  end

  def test_valid_sample
    v.load_xml('<testNode key1="a"/>')
    assert v.is_valid
  end

  def test_sample_with_invalid_attribute
    v.load_xml('<testNode key1="bad_value"/>')
    assert_match(
      /The value 'bad_value' is not an element of the set/, 
      v.errors[0].to_s 
    )
  end
end

MiniTest.run

And a few notes about it:

  • The Validator class provides the main functionality. It doesn't need to change unless new functionality is desired.

  • ValidatorTest is what gets edited and where tests are defined.

  • v.load_schema expects a path3 to the schema detailing the snippet to test. In this case, it points to a schema.xsd file in the same directory. A copy of the schema used for this example is further below.

  • Individual tests are defined using MiniTest's standard test_ prefix methods.

  • Individual XML snippets to test are sent via v.load_xml4.

  • The assert v.is_valid call is used for examples that should pass.

  • The errors.each { |error| puts error } in is_valid provides debugging output when working on changes that cause a failure. Capturing them makes it easy to pull the messages to use when looking for expected failure cases.

  • Validation errors are stored in an errors array. Using assert_match with part of the validation error string ensures errors occur where expected.

  • The full error string in the test for invalid data is Element 'testNode', attribute 'key1': [facet 'enumeration'] The value 'bad_value' is not an element of the set {'a', 'b'}. To help decouple the test a little, it only matches against the The value 'bad_value' is not an element of the set.

Here's the example schema.xsd:

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">

  <xs:element name="testNode" type="testNode"/>

  <xs:complexType name="testNode">
    <xs:attribute name="key1" type="_value_list" use="required"/>
  </xs:complexType>

  <xs:simpleType name="_value_list">
    <xs:restriction base="xs:string">
      <xs:enumeration value="a"/>
      <xs:enumeration value="b"/>
    </xs:restriction>
  </xs:simpleType>

</xs:schema>

While there's not a lot to the setup, it's greatly reduced the amount of time I spend banging my head against the XML Schema wall.


Footnotes

  1. Of course, the script works fine with full schemas and XML documents too.

  2. Making sure data that should be invalid doesn't pass was the biggest driver for making this script. I use Oxygen XML Editor. It makes verifying valid files easy but doesn't appear to have a good way to check for false negatives (i.e. data you expect to fail that ends up passing validation).

  3. Keeping the actual schema snippet in its own file is my preferred way to work. Of course, it's possible to modify the script to include the schema directly as a string.

  4. As with the schemas, it's possible to modify the script to use actual XML files instead. It generally adds more overhead than it's worth.


Embedding a Test Suite in a Single-file Ruby App (Part 1)

May 22, 2016

"You only write code because you expect it to get executed. If you expect it to get executed, you ought to know that it works. The only way to know this is to test it."

– Robert "Uncle Bob" Martin1


Test Driven Development2 has become the foundation of my coding practice. Knowing that, under it all, I can have math3 actively and automatically proving my code works4 has become so fundamental that I'm reluctant to do anything without it. That reluctance extends all the way to simple, single-file apps5.

Testing generally involves splitting code into two files:

  1. Code that performs a task
  2. Code that tests the code that performs a task

Most projects contain lots of files with application and testing code, documentation, supporting assets, etc… Separating testing concerns into multiple files not only works, it's desirable. Unfortunately, it's completely at odds with the goal of a self-contained, single-file tool. I've been struggling with this a lot. Regularly falling back to manual testing7 instead of creating automated tests that would require a second file.

After some experimentation, I'm happy to present a nice solution for packing a test suite directly into the same file as the main application code.

The key: Don't worry about separating test execution from actual execution. Just run the test suite every time the app is started.

Here's an example [filename: drink-example.rb]:

#!/usr/bin/env ruby

require 'minitest'
require 'minitest/rg'


class Drink                                    # The Code to Test
  attr_reader :type
  
  def initialize
    @type = "water"
  end
  
  def describe_type
    puts "This is a drink of #{type}."
  end
end


class DrinkTest < MiniTest::Test               # The Test Suite
  def test_that_the_drink_is_water
    drink = Drink.new
    assert_equal "water", drink.type
  end
end


if MiniTest.run                                # The Run/Kill Switch
  puts "Tests Passed! Process can proceed."
  drink = Drink.new
  drink.describe_type
else
  puts "Tests Failed! Drink *is not* safe!"
  puts "-- No process run --"
end

The Drink and DrinkTest classes are standard Ruby and MiniTest8 fare. The MiniTest.run conditional at the end provides the magic. Running the file with ruby drink-example.rb kicks off MiniTest from there. If all the tests pass, the app gets on with its actual business.

Here's what that looks like:

$ ruby drink-example.rb
Run options: --seed 39971

# Running:

.

Finished in 0.000739s, 1352.7535 runs/s, 1352.7535 assertions/s.

1 runs, 1 assertions, 0 failures, 0 errors, 0 skips

Tests Passed! Process can proceed.
This is a drink of water.

If MiniTest finds a problem, it returns false. This triggers the else block, which contains only an error message. The app shuts down gracefully without attempting potentially dangerous operations in its unstable state.

For example, changing @type = "water" to @type = "poison" in the Drink class produces:

$ ruby drink-example.rb
Run options: --seed 44252

# Running:

F

Finished in 0.001335s, 749.1551 runs/s, 749.1551 assertions/s.

  1) Failure:
DrinkTest#test_that_the_drink_is_water [drink-example.rb:16]:
Expected: "water"
  Actual: "poison"

1 runs, 1 assertions, 1 failures, 0 errors, 0 skips

Tests Failed! Drink *is not* safe!
-- No process run --

So, not only does this approach keep everything in one file, it also does a TDD sanity check before each and every run.

I love everything about that.


I'll show more detailed examples of how I use this approach in Part 2.

Footnotes

  1. From Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin. A book that'll rank high when I make my list of recommended reads for other coders.

  2. Test Driven Development still feels like a quantum leap in my ability to make things. I recently finished a mad-dash migration project using languages and systems I wasn't really familiar with. While it's some of the least efficient code I've ever written, it has three things going for it. First, we launched on time. Second, everything worked. Neither would have been possible without the test suite I built as my first step and used throughout the migration. And third (saving the best for last), with the test suite as my backstop, I'm now removing all the cruft carried over from the migration while being confident the system still works as expected.

  3. That's right, math. Because it's all ones and zeros inside the machine and every test case boils down to a "1" if everything worked as expected and a "0" if it didn't.

  4. Critical point: "works" in this context means only that the code is responding in a way the test case expects. There are a host of reasons (like testing the wrong thing) that it may not be doing what's actually desired even if all the tests pass.

  5. After building web pages, writing small, self-contained Perl scripts is how I got started coding. While it's been a couple decades and I've moved on to Ruby, the power of small, custom tools that fit in a single file still amazes me. At any given time, I've got 50 or more floating around6 that get varying degrees of use. Some only last an hour. Some have been around for years.

  6. I use Code Runner to house these apps. Makes it super easy to jump to and run any one of them in the blink of an eye. (It bugs out from time to time, though not enough to warrant looking for a replacement.)

  7. Testing by hand was all I used to know. Trying to imagine going back to that makes me wonder how I got anything done. While I have no real complaints about my coding journey so far, learning how to build automated tests from the start is one thing I'd absolutely change if I could go back in time.

  8. The initial tutorials I went through to learn Ruby used RSpec for testing. While I can see some of the appeal, I was happy when I found MiniTest. It makes more sense to my brain and has less overhead since it doesn't require learning a Domain Specific Language (which slowed my overall learning progress considerably).


An XML Schema (XSD) Definition to Prevent Leading Zeros in Integers

March 10, 2016

The XML Schema specification1 provides several handy data types. For example, xs:positiveInteger2 produces "the standard mathematical concept of the positive integer numbers."

Well, mostly.

There's a hidden gotcha in positiveInteger. It allows an anathema. Leading zeros.

For example, given the definition:

<xs:element name="node">
  <xs:complexType>
    <xs:attribute name="number" type="xs:positiveInteger"/>
  </xs:complexType>
</xs:element>

These are all valid3:

<node number="1"/>
<node number="100"/>
<node number="8675309"/>
<node number="007"/>

That last one can cause all kinds of havoc.

When leading zeros are involved in data feeds, they have to be treated as either a string (to maintain the zeros) or converted into an actual integer. Given a system of any size or longevity, the likelihood of different processes making opposing choices approaches 100%. Subtle, super-annoying bugs are born. Ones that take a surprisingly large amount of time to fix4.
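A tiny Ruby sketch shows how the mismatch bites. (The ID value here is just an illustration.)

```ruby
# One process keeps the ID from the feed as a string:
id_from_feed = "007"

# Another converts it to an actual integer. (Base 10 is explicit so
# the leading zeros don't trigger octal parsing.)
id_as_integer = Integer(id_from_feed, 10)  # => 7

# Round-tripping back to a string no longer matches the original,
# so lookups keyed on the raw feed value quietly fail:
puts id_from_feed == id_as_integer.to_s  # prints "false"
```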

Thankfully, XML Schema is robust enough that we can define data types that prohibit leading zeros. For example:

<xs:simpleType name="_positive_integer_without_leading_zeros">
  <xs:restriction base="xs:positiveInteger">
    <xs:pattern value="[123456789]\d*"/>
  </xs:restriction>
</xs:simpleType>

<xs:element name="node">
  <xs:complexType>
    <xs:attribute name="number" type="_positive_integer_without_leading_zeros"/>
  </xs:complexType>
</xs:element>

This works by using a Regular Expression5 to enforce the data format. It's a little easier to understand by breaking the pattern's value into two parts. First:

[123456789]

Anything inside square brackets identifies possible values for a single character. So, [123456789] at the start of the pattern value means the first character must be either: 1, 2, 3, 4, 5, 6, 7, 8, or 9. The lack of zero means anything starting with a "0" won't match and will therefore be rejected as invalid.

The second part of the pattern is:

\d*

A \d (without the *) tells the pattern matcher to look for any single digit. If it were by itself, it would mean there always has to be a second character and that character must be a digit. The * modifies \d to allow "zero or more" digits.

If the data being matched is a single character, the \d* has no real effect. If there are two or more characters, it enforces the restriction that every character from the second until the end must be a digit. Unlike the earlier [123456789], the \d pattern includes all possible digits, including zero.
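The combined behavior is easy to sanity-check outside of a validator. Here's a quick Ruby sketch of the same pattern. (Note the explicit \A and \z anchors: XSD patterns are implicitly anchored to the whole value, while Ruby regexes match substrings by default.)

```ruby
# The schema's pattern, anchored to the full string:
pattern = /\A[123456789]\d*\z/

valid   = %w(1 100 8675309)
invalid = %w(007 0)

puts valid.all?    { |s| s.match?(pattern) }  # prints "true"
puts invalid.none? { |s| s.match?(pattern) }  # prints "true"
```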

Combined, the [123456789]\d* pattern produces the desired behavior. These actual integers all pass the validation test:

<node number="1"/>
<node number="100"/>
<node number="8675309"/>

But this one's accurately rejected as invalid:

<node number="007"/>

This little snippet is now safe from those pesky little leading zeros sneaking in.

Count yourself lucky if you've never had to deal with leading zeros. If you want to avoid them in future XML, use this kind of custom data type instead of xs:positiveInteger.


Notes

  1. This technique works equally well in XML Schema 1.0 and 1.1.

  2. The xs:positiveInteger data type allows "+" at the start of the number (e.g. "+8675309"). The definition above doesn't. I built it to deal with unique IDs from a database. None of which contain the "+". Changing the pattern value to \+?[123456789]\d* would accommodate the plus if you need it. Other variations are left as exercises for the reader.

Footnotes

  1. Official XML Schema Documentation

  2. xs:positiveInteger details

  3. While not all validators find the same things, the "007" string was a valid xs:positiveInteger value in Saxon-EE 9.6.0.5, LIBXML, and Xerces running in oXygen XML Editor. Speaking of which, if you do any XML work at all and don't know about oXygen, you should check it out. It's expensive but totally worth it.

  4. This doesn't even begin to get into what happens when everyone agrees that strings are the way to go but you run out of numbers and need to add a new zero to the front.

  5. Regular Expressions - a sequence of characters that define a search pattern - the heart and soul of text processing for an old Perl coder like me.


Protection from Adobe Creative Cloud's Folder Erasing Bug

February 12, 2016


Preface: This post tells you to run commands in the Terminal on your Mac. It's a powerful way to tell a computer what to do and well worth learning a little bit about. However, blindly following this type of direction from unknown folks on the Internet can be dangerous. Sneaky folks can trick you into installing viruses/spyware and other bad things. Always ask a tech-buddy you trust to look at anything like this before you follow the directions to run or install it. (Especially if you see the word sudo, which has Godlike abilities on Macs.)


Update: Good news, everybody. Further reports indicate the bug doesn't delete the folder, just the contents. So, all that needs to be done is to make a protection folder one time. That can be done with:

sudo mkdir /.aaaaaaProtectionFromAdobeCC

Since the folder itself isn't deleted, there's no need to go through the hassle of the rest of the stuff below.


A February 2016 update from Adobe Creative Cloud is deleting the first folder it finds alphabetically on Macs.

This is bad. It's breaking things like Backblaze's backup service.

Until it's fixed, the safest thing to do is create an empty, throw-away folder that it'll see first. Creative Cloud will kill it while leaving alone the stuff that makes your Mac actually run. And, because there are reports of it happening multiple times, you'll want to set things up to recreate the folder automatically.

I created a script that will make the folder then check to make sure it stays there. To install it, copy and paste the lines below into your Terminal application (hit "Return/Enter" after each one to run them).

  1. This line downloads the script file and puts it in a folder that Macs use for setting up automation:

    sudo curl -s -o "/Library/LaunchAgents/com.alanwsmith.adobeCreativeCloudProtection.plist" "http://alanwsmith.com/com.alanwsmith.adobeCreativeCloudProtection.plist"
  2. The script will automatically start if the Mac is rebooted because it's in the /Library/LaunchAgents folder. To start it without rebooting, run this:

    sudo launchctl load "/Library/LaunchAgents/com.alanwsmith.adobeCreativeCloudProtection.plist"

That should protect you until Adobe corrects the behavior.

Here's a video on how to open the Terminal if you need help with that. You'll also need to use an Admin account and enter your password after running the first command. Finally, these lines are long and some will scroll. Be sure to copy the entire thing.
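For the curious, here's a minimal sketch of what a LaunchAgent plist along these lines might look like. This is an illustration only (the com.example label and the five-minute interval are placeholders), not the actual file the curl command downloads:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.adobeCreativeCloudProtection</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/mkdir</string>
    <string>-p</string>
    <string>/.aaaaaaProtectionFromAdobeCC</string>
  </array>
  <key>StartInterval</key>
  <integer>300</integer>
</dict>
</plist>

The mkdir -p does nothing if the folder already exists, so rerunning it every few minutes is a cheap way to make sure the protection folder stays in place.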


To remove the script after Adobe gets their side fixed, run these three lines in the Terminal to turn off the script:

  1. This stops the script from running:

    sudo launchctl unload /Library/LaunchAgents/com.alanwsmith.adobeCreativeCloudProtection.plist
  2. This deletes the file (so it doesn't start again next time you reboot):

    sudo rm /Library/LaunchAgents/com.alanwsmith.adobeCreativeCloudProtection.plist
  3. And this removes the throw-away folder that provided the protection.

    sudo rmdir /.aaaaaaProtectionFromAdobeCC

(Note: You'll need to use an Admin account and enter your password with these too.)

Software development is hard. Adobe's software is incredibly complex. Sure, this sucks, but it's worth keeping that in mind before blasting Adobe. The real tests are how quickly they respond and whether this same thing ever happens again.


Convert a Ruby Array into the Keys of a New Hash

December 02, 2015

The need to migrate an array into a hash crops up on occasion. The simplest approach is to turn each array item into a hash key pointing at an empty value. This is a situation where the Ruby Array object's .collect method works great. For example:

hash = Hash[array.collect { |item| [item, ""] } ]

Fleshing it out a bit more, here's a full demo showing it in action:

#!/usr/bin/env ruby

require 'pp'

array = %w(cat hat bat mat)
hash = Hash[array.collect { |item| [item, ""] } ]

pp array
pp hash

which produces the output showing the original array and then the hash with the desired structure:

["cat", "hat", "bat", "mat"]
{"cat"=>"", "hat"=>"", "bat"=>"", "mat"=>""}

Of course, the processing block can assign values as well. For example, changing the above example to use:

hash = Hash[array.collect { |item| [item, item.upcase] } ]

would produce the hash with:

{"cat"=>"CAT", "hat"=>"HAT", "bat"=>"BAT", "mat"=>"MAT"}
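As an aside, the same transformation works without the Hash[] constructor. For example, each_with_object builds the hash directly:

```ruby
array = %w(cat hat bat mat)

# Accumulate key/value pairs into a fresh hash:
hash = array.each_with_object({}) { |item, h| h[item] = item.upcase }

p hash  # {"cat"=>"CAT", "hat"=>"HAT", "bat"=>"BAT", "mat"=>"MAT"}
```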

Good stuff.


P.S. Let me know if you have a simpler way to turn ["cat", "hat", "bat", "mat"] into {"cat"=>"", "hat"=>"", "bat"=>"", "mat"=>""}.

