Bitcoin maximalist.

Thoughts & Technical Writings.

Responses to Bitcoin’s Diversity of Use-Cases and Security Models


Original post: http://bluematt.bitcoin.ninja/2017/02/28/bitcoin-trustlessness/

While there is arguably some trust in miners required to ensure the entirety of the blockchain isn’t reorganized

Why are you singling out miners? The 51% attack vector is neither exclusive to nor more likely to be pursued by miners. Anyone can incur the PoW cost to mine an alternative version of history (alternative blockchain) and propagate those blocks to nodes. In fact, I would argue that miners are less likely than non-miners to pursue a 51% attack since they are in an extremely competitive business.

Of course to ensure you aren’t trusting miners and pools to secure their operations perfectly

What does this mean? If your point is that one should wait for n confirmations before assuming your transaction is valid, that is incorrect. Miners have no role in validating transactions (full nodes do that); they just choose which already-validated transactions should be put in the next block. If they try to put invalid transactions in a block, then that block will be rejected and their mining reward foregone.

Clearly a payment system which requires a week or more for payment to clear would not be able to compete with much faster alternatives.

This is not an apples to apples comparison. Many people seem to want to compare Bitcoin to Visa, but that is a trap. After SegWit is activated, Bitcoin will enable a trustless, instantaneous and secure peer-to-peer payment network like Lightning, but Bitcoin itself will never be that payment network by design. Achieving decentralized consensus via PoW and fully-validating nodes requires a massive amount of redundant messaging to propagate throughout the network, which is not a design that lends itself to high throughput for the network at large.

Many investors who care strongly about Bitcoin’s scarcity properties are happy to trust centralized third-parties in the form of Bitcoin exchanges and “Bitcoin banks”.

That trust is misplaced. They should not keep coins at an exchange (see: all the exchange hacks) and should certainly not get anywhere near something called a “Bitcoin bank” since in Bitcoin users are their own bank. Bitcoin is all about financial disintermediation.

Many Bitcoin users who want fast payments for medium- to small-value transactions are happy to trust miners, in sufficient measure.

In what way are they trusting miners? Again, miners do not validate transactions, they just decide which transactions to put in the next block.

Such trust relationships, as long as users aren’t forced into them (either by explicit requirement or sufficiently strong financial incentive), can provide significantly better user experience through faster, cheaper, and more user-friendly transactions.

Aren’t you saying that users choose such a trusted relationship? One would think they do this because of a strong financial incentive, e.g. 0 confirmations at one time was considered a value-add feature for payment processors to attract users, but the payment processor would have to trust the user to not double spend. Can you give an example of what you’re describing in this paragraph?

(eg trusting miners or developers to keep the 21 million Bitcoin limit)

Neither of these makes sense to me. Developers clearly do not hold a monopoly on the total number of bitcoins in circulation. What does ‘trusting miners’ mean?

Bitcoin must only change by consensus of its ever-growing userbase.

Bitcoin is consensus, and that consensus is determined by the software being run primarily by fully-validating nodes, and additionally to some extent by miners & wallets/clients which relay new transactions to the network. Bitcoin does not care whether or not people agree.

Putting all of this together we see a picture of where Bitcoin must evolve if it wants to retain its trustless properties while providing a usable system for its many, vastly divergent, use-cases.

That step in the evolution is already here and it’s called SegWit. So, run a SegWit full node and help us move towards that vision.

or even a trusted “Bitcoin bank”.

This is not one of the divergent use cases you mention that are relevant or specific to Bitcoin. If you want to trust a 3rd party, you don’t need (and shouldn’t use) Bitcoin.

Users which do not even want to trust miners should be free to do so, placing their transactions on the blockchain and waiting weeks to ensure even future hashpower attacks will not reverse them

What? I’m lost. By “placing their transactions on the blockchain” are you talking about running one’s own mining equipment? And, again, you aren’t trusting miners. Miners expend energy to perform proof-of-work; an attacker trying to double spend needs to redo all of the work between (a) the block that includes the double spend transaction(s), and (b) the most recent block in the chain.
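
To make the cost of that concrete, here’s a toy Ruby sketch (illustrative only - nothing like real consensus code): nodes follow whichever chain embodies the most cumulative proof-of-work, so a rewrite that forks off 40 blocks back starts 40 blocks’ worth of work behind.

# Toy model: each block records the work spent to produce it.
best_chain = ->(chains) { chains.max_by { |chain| chain.sum { |block| block[:work] } } }

honest   = Array.new(100) { { work: 1 } }                   # 100 blocks of honest history
attacker = honest.first(60) + Array.new(10) { { work: 1 } } # forked 40 blocks back, redone only 10 so far

best_chain.call([honest, attacker]).equal?(honest) #=> true
# The attacker must redo all 40 replaced blocks AND outpace the honest tip
# before any node prefers his version of history.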

the community of Bitcoin users must continue to enforce that changes happen only through consensus among the ever-broadening group

What community? Whom are you addressing? Bitcoin is about software running on machines, not people’s opinions about what is good or bad for Bitcoin.

Critically, this means that all changes which do not harm the utility of Bitcoin for any of its many use-cases, while helping others, should be made, wherever possible.

This is the exact thing people are disagreeing about currently vis-a-vis SegWit. And since SegWit is an opt-in backwards compatible soft fork that preserves the trustless, decentralized model, it should be activated.

Bitcoin the Freedom Fighter


Bitcoin represents a mental stepping stone, a philosophical spring board from which I’m better able to envision a future not controlled by the State and its numerous appendages (the FBI, CIA) and countless subsidiaries. By breaking the link between money and sovereignty (there are neither Presidents nor sovereigns printed on bitcoins), Bitcoin paves the way for a tidal wave of cultural, social, and technological innovation and progress. Since Bitcoin offers an Internet of Money with no inherent overlap with existing (i.e. fiat) monetary and financial systems, over time as adoption rises, people will be motivated to keep an ever-increasing proportion of their value in Bitcoins because it is better money. This last point is not debatable; it is not an opinion, it is not fake news, it is fact. Fiat money is not yours (aside from cash; but what portion of your value do you hold in cash?) - it is merely an IOU, a ledger balance. When you log into your online banking (assuming you live in one of the advanced economies where we have access to such financial services), the money that you see and claim to “have in your account” has in reality been lent out or applied to some probably more risky strategy like trading or financing of some exotic, complicated deal loaded with tantalizing fees for the bankers involved. If - or rather, when, given the history of financial crises, which seem to be part of the human economic condition - there is a run on the bank and the FDIC, you will be forced to get in line with all the rest (and probably take a massive haircut, i.e. you will lose most or all of the money held in bank accounts).

This is simply not the case with Bitcoin. You are your bank. Bitcoin is strictly a bearer asset, where possession of the private key that allows your coins to be spent is ten-tenths of the law, so to speak.

But, bitcoins are not very useful today. I have to wait at least 20 minutes for my transactions to be confirmed; transaction fees are too high, I pay nothing to use my credit cards. Why would anyone ever use Bitcoin?

Ask the people in Africa, South America, East Asia, and so on, where each morning you wake up and hope that your grab bag of problems created by Big Brother doesn’t have a surprise waiting for you. That list includes things like profound corruption & graft, the social stranglehold wrought by nepotism at all levels of society, hyperinflation of your currency that destroys your purchasing power on a daily basis, oppressive political regimes which rob & pillage the people, and so on.

If, 10 years ago before Bitcoin was invented, someone had asked you the following question, how would you have responded?

How long would it take to design an Internet of Money which is not controlled by any central arbiter, no government, no corporation; a freely accessible network that cuts out the banks and turns the global financial-monetary system on its head and renders it utterly useless?

I suspect you’d either laugh, stare, or walk away, thinking something like, “Dreamers, who needs ‘em.” We all would have. Bitcoin has existed for 8 short years and it’s already significantly better than SWIFT, ACH, and most bank transfer services around the world.

In 10 years, maybe 5, all the “problems” that exist today with Bitcoin will be solved. Lightning Network is coming, SegWit will be adopted, and the impact of Bitcoin’s open, permissionless protocol allowing for exponential innovation will render all the world’s fiat currencies laughably, hopelessly obsolete with no shot of ever catching up. From there, the virtuous cycle of utility begetting adoption ad infinitum will kick in, and then it’s simply game over for the incumbents.

So, what happens to things like taxes when, in the future, everyone is storing their value in Bitcoin, where it is easy to hide it from the prying eyes of the State and even your spouse? It will trigger a forced revolution, a painful, messy, chaotic breaking with the past - a cultural & social forest fire which will wipe out the old guard and the incumbents, anyone standing in the way of progress, and will give the future over to the nerds and to the Individual. Power structures like governments will no longer be able to domineer over their “subjects” by dictating and controlling what they can (buy things at Wal-Mart, please!) and cannot (that politician is evil, and since you donated to her you must be, too!) do with their money. The State and any centralized entity will only be able to grow as large as the exact number of Individuals which explicitly “vote” to support that structure, where voting is represented as financial support - buying a product from a company, paying for a service, donations, or even paying taxes if you’d like.

The future is bright for the Individual, but the transition will be difficult. And we really haven’t even begun yet. Remember: all exponential relationships appear to be linear at the beginning. I hope I live long enough to see the revolution in full swing.

Bitcoin and Taxes (Your Favorite Topic!)


“Each $5,000 of annual tax payments made over a 40-year period reduces your net worth by $2.2 million assuming a 10% annual return on your investments,” reports James Dale Davidson in The Sovereign Individual: Mastering the Transition to the Information Age

Just let that sink in for a bit please. Mull over the figures in your delicate psyche. Imagine the impact of $2.2mm on your future.
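
A quick sanity check on Davidson’s figure - the standard future value of an annuity, sketched in Ruby (assuming end-of-year contributions):

payment = 5_000   # annual tax payment redirected into investments instead
rate    = 0.10    # 10% annual return
years   = 40

future_value = payment * (((1 + rate)**years - 1) / rate)
puts future_value.round #=> 2212963 - roughly the $2.2mm Davidson cites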

… now, who’s ready to talk about taxes?!

Is Bitcoin useful for tax evasion?

(NOTE: This is not tax advice, nor am I suggesting you seek to evade paying taxes. This post is for entertainment purposes only.)

There are two types of taxation regimes the world over: You either live in a country with competent or incompetent tax authorities.

If incompetent, there are a dozen other ways to avoid paying taxes, probably all of them easier than Bitcoin.

The United States is certainly one of the more competent tax collectors in the world via its ‘revenue’ (talk about a euphemism) arm, the IRS.

So our question is really: is it possible to use bitcoin for tax evasion within the US?

What can the IRS do, exactly?

As of today, the IRS simply hopes that you report any capital gains on profits obtained on net BTC purchases. However, it’s clear the IRS is trying to move from ‘hope’ to ‘coerce’ as its policy verb of choice: they’ve just asked for a broad and sweeping disclosure from Coinbase regarding its customers’ historical transaction data. Should this data be handed over to the Feds, you can bet they will start building a ledger for each customer - purchases made (volume and price) net of sales (again, volume and price) and send you a capital gains bill for any PNL realized on those transactions.
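
What might that ledger computation look like? Here’s a hypothetical sketch (assuming a FIFO cost basis; the IRS’s actual methodology, if any, is not public):

# Hypothetical per-customer ledger: FIFO cost basis applied to a sale.
purchases = [ { qty: 2.0, price: 250.0 }, { qty: 1.0, price: 400.0 } ]
sale      = { qty: 2.5, price: 1_000.0 }

remaining, cost_basis = sale[:qty], 0.0
purchases.each do |lot|
  used        = [lot[:qty], remaining].min
  cost_basis += used * lot[:price]
  remaining  -= used
end

capital_gain = sale[:qty] * sale[:price] - cost_basis
puts capital_gain #=> 1800.0 - the figure a capital gains bill would be based on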

However, this doesn’t address situations in which you’ve used Coinbase (or another exchange like Gemini) solely to purchase your coins, and immediately withdrawn them to your personal, private wallet.

What will the IRS’ policy be regarding these murky situations? Remember, I’m saying ‘murky’ here because they do not know the fate of your withdrawn bitcoins. That means that you could have:

  • lost your coins -> you’d claim a capital loss in this case, since the capital you outlaid upfront is now worth exactly $0
  • held onto your coins -> you’d report nothing as you only incur capital gains upon sale of your investment
  • sold your coins -> you’re supposed to report this honestly and pay capital gains

As you can see, the main issue here for the IRS is their total and utter lack of visibility. Bitcoin does not need the IRS’ nor the Federal government’s blessing to exist - it just does. It is a completely separate, autonomous payment network that is under the control of no single individual or entity. There is no one rubber-stamping transactions like the Federal Reserve does in this country.

Some open questions, however, remain regarding the extent to which the IRS could pursue an individual (i.e. audit them) if they suspect misreporting of capital gains/losses regarding cryptocurrency holdings. Will they conduct rigorous blockchain analysis - which is extremely computationally expensive (by design, in a way… more on this in another post perhaps) - to try and piece together a story and send you a tax bill? This strategy is feasible in theory, but would represent a huge consumption of their resources in practice.

The fact is that the tools at the disposal of the IRS are limited at best.

Perhaps, though, this is for the better.

A better tax regime

In bitcoin, you pay taxes on every transaction - it’s called the transaction fee. You can voluntarily pay a higher fee and you will increase the likelihood that your (probably time-sensitive) transaction will be part of the network’s next block, i.e. your transaction will be settled in a much more timely manner. This is an example of a voluntary tax system (credit to @TravisPatron for exposing me to this concept).

[The Bitcoin] transactions queue represents a voluntary, pay-for-performance taxation structure where the performance derived from the system is dependent upon how much taxation [a user is willing to] pay.

Bitcoin puts the power back in the hands of its users. If you value the service, the right to use the Bitcoin network to move value between two nodes in the network, then you pay tax for that performance.
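
As a toy illustration of that pay-for-performance queue (illustrative numbers only; real miners rank transactions by fee rate, i.e. satoshis per byte, under a block size cap):

mempool = [
  { txid: "a1", fee_per_byte: 10 },
  { txid: "b2", fee_per_byte: 80 },
  { txid: "c3", fee_per_byte: 35 }
]

block_capacity = 2 # toy capacity, measured in transactions
next_block = mempool.sort_by { |tx| -tx[:fee_per_byte] }.first(block_capacity)
next_block.map { |tx| tx[:txid] } #=> ["b2", "c3"] - the higher the "tax", the better the performance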

Imagine a system where you pay tax for roads only when you use the roads to drive. Or you only pay taxes to schools in the form of some tuition cost when your children are attending those schools.

I think that a tax regime that aligns a person’s tax liabilities with his or her behaviors (where a behavior is an expression of their preference) is a compelling vision for the future. Perhaps you will agree.

WorldWideWebCoin: A Thought Experiment


WorldWideWebCoin - WWWCoin vs. Bitcoin - More usefulness leads to more adoption leads to more usefulness

Bitcoin is many things - open source peer-to-peer (p2p) digital money; an alternative asset class for investors; and possibly the global reserve currency of the future, when machines are participating in marketplaces autonomously without human intervention.

The value of one Bitcoin (1 BTC) in essence is a reflection of its usefulness across the many functions in which it’s being used.

Bitcoin starts to look more like a safe haven asset that’s decorrelated with, say, the S&P500? Its value should increase.

Bitcoin can be used more and more places to buy goods and services? Its value should increase.

More companies offer employees compensation partially or wholly in Bitcoin? Its value should increase.

WWWCoin Hypothetical Price History

Let’s say there had been a WWWCoin back in the early 1990s, shortly after Tim Berners-Lee invented the World Wide Web in 1989.

Now picture a chart of that coin’s valuation from its inception in c. 1989; through the dot-com bubble’s bursting beginning in 2000; and finally bring it through to the present. What would that chart look like? Here’s my take…

Bitcoin Realized Price History

Just for fun, let’s see how our hypothetical WWWCoin chart compares to the realized BTC price chart to date.

Some similarities of note:

  • Early precipitous rise leads to a bubble, i.e. overdone price increase relative to the actual + potential usefulness of both the internet and Bitcoin
  • Steady, grinding recovery following the first major crash in price as the technology continues to make strides in terms of usefulness and applicability
  • No clear upper bound on price as usefulness enters into a virtuous feedback loop - more adoption means more usefulness means more adoption, ad nauseam.

The last point has only been realized to date in the case of the Internet; my prediction is Bitcoin will follow a similar pattern. Have you heard of micropayments? The technology that will finally kill YouTube ads? Well, they’re almost here, courtesy of Bitcoin and the Lightning Network.

I think we’re standing on the precipice of an inflection point in Bitcoin’s history - at which the feedback between Bitcoin’s expanding utility and increasing price becomes obvious to just about everyone. Do you want to participate or not?

Private Methods Can Save Some TDD Headache


Let’s face it: TDD is hard. I would argue that there’s value in that difficulty - TDD forces you to address domain modeling decisions early on; to declare your API before implementing it. I have spent more time than I care to admit staring at some *_spec.rb file, unsure about how to translate the loosely-defined conception of behavior in my brain into test examples.

However, there’s good news: relying on private methods can save some (some) of this pain.

A Rails Example

Let’s say your co-worker has asked you to write a webhooks microservice. This standalone API will be responsible for:

  1. Creating new callback subscriptions for users that want to subscribe
  2. Sending the callback payloads to said user’s callback(s) URL(s)

‘A crucial part of any callbacks API is delivering, over the web via HTTP POST, the actual callback payload,’ you reason aloud.

To that end, you conjure up a vision of some DeliverCallbackJob class that needs only a User instance and the callback payload (probably a long string of JSON, like a serialized web response). Following a common pattern, you decide the interface for this class will be limited to one method:

DeliverCallbackJob#perform!

In adherence to TDD, we may start with an example (RSpec DSL here) that looks like this:

RSpec.describe DeliverCallbackJob do
  # NOTE: some/much setup code omitted...

  let(:deliverer) { DeliverCallbackJob.new(user, payload) }

  describe "#perform!" do
    subject { deliverer.perform! }

    it "sends a callback to all of the user's associated Callback#callback_url endpoints" do
      expect(Net::HTTP).to receive(:post).exactly(user.callbacks.size).times
      subject
    end
  end
end

Now, User#callbacks is a standard ActiveRecord::Associations::ClassMethods#has_many association. So we may be tempted to now drop into our class definition and start implementing:

#NOTE: This is pseudo code! Read between the lines somewhat please :)

class DeliverCallbackJob
  def perform!
    user.callbacks.each do |callback|
      Net::HTTP.post(callback.callback_url)
    end
  end
end

The Problem

This works, yes - but we’ve unwittingly coupled logic unrelated to delivering a callback to this class. Specifically (and this can be hard to see in Rails a lot of the time), the usage of user.callbacks is the point of coupling. DeliverCallbackJob does not and should not care how we get a (possibly empty) list of callbacks from our user record.

Re-writing our Test

A better test example - one that would have exposed this tendency to couple to the ActiveRecord API - might read like this:

describe "#perform!" do
  subject { deliverer.perform! }

  let(:size)      { 4 }
  let(:callback)  { double(callback_url: "foo.bar.com/callback") }
  let(:callbacks) { Array.new(size, callback) }

  before { expect(deliverer).to receive(:callbacks).and_return(callbacks) }

  it "sends a callback to all of the user's associated Callback#callback_url endpoints" do
    expect(Net::HTTP).to receive(:post).with("foo.bar.com/callback").exactly(size).times
    subject
  end
end

The key difference here is the assertion we’ve added to the before hook:

before { expect(deliverer).to receive(:callbacks).and_return(callbacks) }

We’ve implicitly augmented the interface of DeliverCallbackJob with a method, #callbacks.

This is a really good thing! Now, we have a place to put the logic of “gather all the Callback records against which I need to deliver the callback payload at hand”. And as such, we have a new point at which we can introduce stubs or mocks in our tests.

In order to get this spec passing, we might refactor our class like so:

class DeliverCallbackJob
  def perform!
    callbacks.each do |callback|
      Net::HTTP.post(callback.callback_url)
    end
  end

  private

  def callbacks
    # This is now an implementation detail, as opposed to a closely coupled
    # and hidden dependency stowed away within the `perform!` method.

    # For now, we're using ActiveRecord so implementation may be unchanged
    # relative to our first version of this class... but if we drop Rails/AR
    # in the future, we have a clearly documented spot to change this logic.

    user.callbacks
  end
end

Recap

So what did we learn today? When you’re struggling to get through your TDD, it tends to indicate hidden coupling and dependencies. When you feel this happening, take a closer look at your implementation and see what can be moved into private methods. Then, in your test examples, make those dependencies explicit so that you can mock, stub, or whatever suits you.

Extend an Instance to Dynamically Include


Here’s a question for you lovely developers (and non-developers, too!) out there: how can we dynamically include behavior from a specific module at runtime? Put differently, is it possible to include one specific module’s behavior after a class definition has already been read in? The typical Ruby include-behavior-from-a-module pattern looks something like this:

class Employee
  attr_accessor :can_be_promoted_to_manager

  include Trainable

  # ...
end

… where Trainable might extend the behavior of Employee instances something like this:

module Trainable
  def train!
    ## some training-related code here...
    self.can_be_promoted_to_manager = true
  end
end

This is a contrived example, clearly, but the pattern should look familiar: in Ruby, we love to encapsulate behavior into modules and include them - the name for this practice is separation of concerns. However, since we write the include directive within a class-level scope, we cannot for example use an instance of Employee itself to determine which module’s behavior we’d like to include. Let me explain with a quick example…

Different Modules for Different Training

Assume that instead of one Trainable module, we really have multiple modules that encapsulate distinct forms of employee training like follows:

module ManagerTrainable ; end
module TechnicianTrainable ; end
module ExecutiveTrainable ; end
module SalesPersonTrainable ; end

(NB: As you can see, I’ve omitted any actual methods from these modules above, but use your imagination! e.g. ExecutiveTrainable#soul_suck!, or maybe SalesPersonTrainable#learn_to_peddle!)

These modules may have also been written to reflect the various real-world types of Employee that are modeled in our Ruby application:

manager    = Employee.new(type: :manager)
technician = Employee.new(type: :technician)

Each of these employees also needs to be able to go through some form of “training”. So, in order to ensure that all Employee types can be train!‘ed in the appropriate manner, we can simply include all modules at the class level:

class Employee
  include ManagerTrainable
  include TechnicianTrainable
  include ExecutiveTrainable
  include SalesPersonTrainable

  # ...
end

Unfortunately, this has a number of problems. First, it doesn’t communicate the developer’s intent very clearly. Someone coming along and reading this code in some months’ time might ask, “What’s going on - why all these variations of Trainable?” Second, we risk overwriting methods if the modules share the same/similar interfaces (for example, if each module implements #train! independently, you would have to change the method names, or apply logic to the ordering of our 4 include directives, or… well, you’d have a real problem on your hands to solve at that point). To see the collision concretely, consider the sketch below.
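
Here’s a minimal sketch of that collision (the #train! bodies below are hypothetical, purely to show Ruby’s lookup order): when several included modules define the same method, the most recently included module wins.

module ManagerTrainable    ; def train! ; :manager_training    ; end ; end
module TechnicianTrainable ; def train! ; :technician_training ; end ; end

class Employee
  include ManagerTrainable
  include TechnicianTrainable
end

Employee.new.train! #=> :technician_training - every employee now trains like a technician!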

Extend at instance-level scope to simulate include instead

Ruby thankfully offers a very elegant, dynamic solution to this problem. Since the logic of which Trainable module should be included is a function of the instance’s employee.type, we can use a lifecycle hook in combination with extend like so:

class Employee
  attr_accessor :type

  def initialize(opts = {})
    # first set the object's attributes as usual
    @type = opts[:type]
    # then extend the appropriate module's behavior onto the instance's Eigenclass (more on this shortly)
    extend(Kernel.const_get("#{type.to_s.capitalize}Trainable"))
  end
end

In Rails, you could refactor this by using the provided object lifecycle hooks, e.g. after_initialize like so:

class Employee
  after_initialize :extend_trainable_behavior

  # ...

  private

  def extend_trainable_behavior
    # So, for an `employee` whose type is 'manager', we are effectively including `ManagerTrainable`'s behavior only!
    extend(Kernel.const_get("#{type.to_s.capitalize}Trainable"))
  end
end

Code Example: add Human#moniker behavior with both class-level include and instance-level extend

In the following code example (where all humans are named "Matt!", ha), you can see how either include or extend can be used (at different scopes, of course) to tell all Humans how to report their name. Note that console/IO output is indicated by commented out lines with a single # - you’d get the same output by copy-pasting this code into a live REPL.

## Here's our nameable concern. We use both of Ruby's provided hooks - included and extended - to report
## when either event takes place at runtime. 

module Nameable
  def self.included(base)
    puts "#{self} included in #{base}."
  end

  def self.extended(base)
    puts "#{self} extended in #{base}."
  end

  def moniker
    "Matt!"
  end
end

class HumanWithNormalInclude
  include Nameable
end
# Nameable included in HumanWithNormalInclude

class HumanWithInstanceExtend
  def initialize
    super
    extend_behavior
  end

  def extend_behavior
    extend Nameable
  end
end

included_human = HumanWithNormalInclude.new
included_human.moniker
# "Matt!"

extended_human = HumanWithInstanceExtend.new
# Nameable extended in #<HumanWithInstanceExtend:0x007fa939a0da40>.
# => #<HumanWithInstanceExtend:0x007fa939a0da40>
extended_human.moniker
# "Matt!"

Black magic?! How does it work?!!

To understand why extending at an instance scope works the way it does, it will help to understand the concept of “eigenclasses” in Ruby. First, let’s define eigenclass and then write a sort of helper method that tells us any object’s eigenclass on demand.

Eigenclass: a dynamically created anonymous class that Ruby inserts into the method lookup path any time at least one singleton method is added to an object. (I added the last bit in italics… I think it’s right, ha.)

class Object
  def eigenclass
    class << self  # this opens us to the scope of the receiver's eigenclass
      self         # return the eigenclass itself
    end
  end
end

To tie the eigenclass concept back into our previous code example, let’s try to answer the question: Where does extended_human.moniker actually “live”?

If we query both of our instances’ classes, it’s not immediately clear where our extended_human instance’s #moniker method lives…

[26] pry(main)> included_human.class.ancestors
[
  [0] HumanWithNormalInclude < Object,
  [1] Nameable,
  [2] Object < BasicObject,
  [3] PP::ObjectMixin,
  [4] Kernel,
  [5] BasicObject
]
[27] pry(main)> extended_human.class.ancestors
[
  [0] HumanWithInstanceExtend < Object,
  [1] Object < BasicObject,
  [2] PP::ObjectMixin,
  [3] Kernel,
  [4] BasicObject
]

I typically query an object’s class’s ancestors when I want to see its inheritance hierarchy (where ‘inheritance’ is a combination of mixins and classical inheritance, i.e. class Foo < Bar) as part of finding where a particular method might have come from. This exercise reminds us that in Ruby, the Kernel module (responsible for eval, gets, puts, etc.) is quite high up in the object hierarchy, and as such its methods/behaviors are made available practically everywhere in a Ruby program.

Notably absent from extended_human.class.ancestors, however, is any reference to our Nameable mixin.

So - where is extended_human.moniker coming from then? Let’s instead look through the instances’ eigenclasses’ ancestors:

Ruby’s method lookup path flows through eigenclasses first!

[29] pry(main)> included_human.eigenclass.ancestors
[
  [0] HumanWithNormalInclude < Object,
  [1] Nameable,
  [2] Object < BasicObject,
  [3] PP::ObjectMixin,
  [4] Kernel,
  [5] BasicObject
]
[30] pry(main)> extended_human.eigenclass.ancestors
[
  [0] Nameable,  # Here it is!
  [1] HumanWithInstanceExtend < Object,
  [2] Object < BasicObject,
  [3] PP::ObjectMixin,
  [4] Kernel,
  [5] BasicObject
]

Aha - found it! Note that for our extended_human, Nameable is the very first ancestor in its method-lookup hierarchy. This is because we extend‘ed Nameable directly into extended_human’s eigenclass, as opposed to include‘ing it in its containing class definition. And once again, to display this point differently:

[31] pry(main)> extended_human.eigenclass.ancestors - extended_human.class.ancestors
[
  [0] Nameable
]

By making use of an anonymous class under the hood at runtime, Ruby gives us the ability to dynamically mix in behaviors at all levels of our programs - namely in our example, at both the class and instance level! Wo-man, I <3 Ruby :)

How Fixed Gear Bicycling Taught Me About Dependency Injection


I became inspired to write this blog post after watching a great @sandimetz talk which she gave at RailsConf 2015. Her subject was primarily the Null Object Pattern, but she extends the fundamental principle - that objects, not conditionals nor excruciatingly tailored classes, should be used to encapsulate your system’s behaviors - to the practical aspects of injecting dependencies required by your domain’s fundamental objects. Let’s take a look at what I mean by using a real-world example: my recently acquired (and already banged up - I’ve had two flat tires already!) fixed gear (a.k.a “fixie”) bike. I want you to start asking yourself: if you were given the task of modeling in Ruby a fixed-gear bicycle alongside a “normal” or freewheel bicycle, what tools would you reach for - inheritance, composition, or otherwise?

Modeling our Bicycle

Let’s throw down a little code to start bringing our bicycle domain to life:

class Bicycle
  attr_reader :frame_size, :color

  def initialize(frame_size, color)
    @frame_size, @color = frame_size, color
  end

  def number_of_speeds
    freewheel ? gear_count : 1
  end

  def gear_count
    1
  end

  def freewheel
    true
  end
end

Cool - now we can instantiate a new Bicycle with our frame size and color preferences. Moreover, we have a sensible default (at least for urban NYC bicycling) for our drivetrain set to single-speed and freewheel. (Note that ‘freewheel’ is distinct from fixed-gear. On a fixie, for example, it is not possible for the chain to disengage from the crank arms, i.e. the motion of the rider’s pedals. On a freewheel bicycle, you can stop the motion of the pedals beneath you and the drivetrain will continue turning, allowing for what we call ‘coasting’.)

This is all well and good - until you find yourself moving to Brooklyn. In Brooklyn, fixed gear bikes are more popular, and it’s probably got something to do with a hipster resurgence.

So now let’s use our knowledge of OO and Ruby to model our fixed gear bicycle.

Reach for inheritance first…?

Here is a perfectly viable implementation of our fixie bicycle variant:

class Fixie < Bicycle
  def freewheel
    false
  end
end

Easy! All we did was inherit from Bicycle, and modify the #freewheel method to instantiate a non-freewheel single-speed bicycle.

Now let’s say, however, that your roommate would prefer a multi-speed bicycle. Once again using inheritance we may write some code like so:

class MultiGear < Bicycle
  def gear_count
    @gear_count ||= 18
  end
end

And once again, this has solved our problem. But, while effective, this solution raises a number of concerns. Mainly, we already have two distinct classes in order to account for two slight variations in Bicycle types. What do you imagine will happen to our system as the number of different variations grows? Put differently:

  • Do we really need distinct, unique classes to model a single variation in the characteristics of our bicycle?
  • Does the organization of our code and the patterns therein easily communicate the distinctions we’re attempting to convey?
  • Are we satisfied with the idea that, should our Bicycles change in other aspects in the future, we’ll continue to open new classes?

Where inheritance breaks down (pun intended)

To focus on the last concern that we just raised, let’s say you’ve accepted a job as a delivery boy for a local Chinese restaurant. Most of those guys, since they’re riding for hours a day, enhance their drivetrain with an electric motor which looks something like this:

You’ve been asked to allow your system to model this new Bicycle variant. Perhaps you could reach for inheritance once again and end up with something like this:

class ElectricMotorAugmented < Bicycle
  def drivetrain
    :electric_motor
  end
end

This may meet the needs of a relatively basic system, but let’s say your boss asks you to run a report…

Boss: “Hey you, new gal! Get me a breakdown of all the bicycles in New York City with an approximation of their top speeds. We are examining the relationship between bicycle accidents and their drivetrain mechanism - we’ve been hearing a lot lately about these electric-motor-augmented bicycles getting into accidents at faster speeds than non-electric bicycles.”

Since your boss seems to be asking for data concerning the relationship between bicycles’ drivetrain mechanism and their approximate ‘top speed’, you set out to run the report with some code like this:

Bicycle.all.map { |bike| [ bike.class.name, bike.top_speed ] }

Now, you need to re-open all of your classes and implement #top_speed:

class Bicycle
  def top_speed
    # in units of MPH
    20
  end
end

class MultiGear < Bicycle
  def top_speed
    25
  end
end

class ElectricMotorAugmented < Bicycle
  def top_speed
    30
  end
end

Gheesh, that was kind of a lot of work - we had to open 3 different classes to find a home for our top speed approximations. You can see that our pattern - which is built on top of inheritance - could certainly become unwieldy and difficult to maintain as our system grows.

A better pattern: inject your dependencies!

I think this pattern really speaks for itself, so I’ll let the code do most of the talking here. Instead of inheritance, if we reached for dependency injection our system may have turned out more like this:

class Drivetrain
  attr_reader :freewheel, :gear_count, :electric_motor_augmented

  # NOTE: we're using Ruby 2.1+ required keyword argument syntax here
  # https://robots.thoughtbot.com/ruby-2-keyword-arguments
  def initialize(gear_count:, freewheel:, electric_motor_augmented: false)
    @gear_count, @freewheel, @electric_motor_augmented =
      gear_count, freewheel, electric_motor_augmented
  end

  def top_speed
    speed =  (gear_count > 1) ? 25 : 20
    speed += electric_motor_augmented ? 10 : 0
  end
end

class Bicycle
  attr_reader :frame_size, :color, :drivetrain

  def initialize(frame_size, color, drivetrain = Drivetrain.new(gear_count: 1, freewheel: true))
    @frame_size, @color, @drivetrain =
      frame_size, color, drivetrain
  end

  def gear_count
    drivetrain.gear_count
  end

  def top_speed
    drivetrain.top_speed
  end

  def number_of_speeds
    drivetrain.freewheel ? drivetrain.gear_count : 1
  end
end

Now we can easily instantiate our various Bicycle types and get #top_speed data from them:

fixie        = Bicycle.new("57 cm", "black", Drivetrain.new(gear_count: 1,  freewheel: false))
single_speed = Bicycle.new("57 cm", "black", Drivetrain.new(gear_count: 1,  freewheel: true))
multi_speed  = Bicycle.new("57 cm", "black", Drivetrain.new(gear_count: 18, freewheel: true))
motorized    = Bicycle.new("57 cm", "black", Drivetrain.new(gear_count: 18, freewheel: true, electric_motor_augmented: true))

fixie.top_speed        #=> 20
single_speed.top_speed #=> 20
multi_speed.top_speed  #=> 25
motorized.top_speed    #=> 35

“Isolate what is different”

By isolating in our mind what was different among our bicycle variations, we were able to extract it out into its own Drivetrain dependency. The real benefit of doing this is that we can inject this dependency into our Bicycle instances as we need! No more subclassing Bicycle endlessly as variation after variation of bike requires modeling in our system. You can envision this pattern of dependency injection coming in handy as your system grows and different attributes of Bicycle start to vary. Have you seen the foldable, transportable frame style of bikes?

Using dependency injection, we can account for this variable attribute pretty succinctly:

class Frame
  attr_reader :color, :size, :foldable

  def initialize(color:, size:, foldable: false)
    @color, @size, @foldable = color, size, foldable
  end
end

class Bicycle
  attr_reader :frame, :drivetrain

  def initialize(frame = Frame.new(color: "black", size: "57 cm"),
                 drivetrain = Drivetrain.new(gear_count: 1, freewheel: true))
    @frame, @drivetrain = frame, drivetrain
  end

  # rest of code omitted for brevity...
end

And here is our foldable Bicycle:

foldable = Bicycle.new(
             Frame.new(color: "black", size: "57 cm", foldable: true),
             Drivetrain.new(gear_count: 1, freewheel: false)
           )

Conclusion

Watching Sandi’s talk (and writing up this post) have certainly changed my opinion on using inheritance versus injecting dependencies into my domain model. I was inspired to write this post by Sandi and the joy I’ve been getting from riding my fixie around Brooklyn for the past couple of months.

I hope you’ve found this blog post interesting and educational. Please let me know in the comments below!

Using Lambdas as Computed Hashes in Ruby


I recently read a blog post about the interesting, quirky aspects of lambdas in Ruby.

One feature that stood out to me was lambdas’ ability to stand in where hashes would normally be used.

This functionality is made possible because, in Ruby, lambdas can be called in any of the following ways:

l = lambda { |x| puts x }

l.call("Foo") # prints "Foo"
l.("Foo")     # prints "Foo" (admittedly this syntax is bizarre to me...)
l["Foo"]      # prints "Foo" (looks like hash access using the typical Hash#[] method...)

The third way is the bridge between lambdas and the concept of “computed hashes”. I searched for a definition of computed hash, but didn’t find much consensus. The working definition for this post would be something like:

A hash object whose values can be initialized (read: computed) at runtime based on logic declared elsewhere in the program.
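
As a minimal sketch of that idea (note that Ruby’s Hash can approximate this natively via a default-value block):

# A lambda standing in where a hash would be: the "value" is computed on access.
squares = ->(key) { key * key }
squares[4] #=> 16, accessed with the same [] syntax as a Hash

# Ruby's Hash offers something similar with a default block:
memoized_squares = Hash.new { |hash, key| hash[key] = key * key }
memoized_squares[4] #=> 16, computed on first access and then stored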

Putting It Together: An Example

When might the use of computed hashes, i.e. lambdas, be a favorable replacement to a normal hash?

Let’s say you’re writing tests for your program and you want to add a degree of “fuzz testing”. As an example, perhaps one of your classes is initialized with first_name and last_name attributes (note the initialize method expects to receive a Hash-like argument as input, in keeping with Rails convention), and then generates a slug to be used for query string parameters elsewhere in your application:

class Person
  attr_reader :first_name, :last_name

  def initialize(hash_like_object = {})
    @first_name = hash_like_object[:first_name]
    @last_name  = hash_like_object[:last_name]
  end

  def slug
    @slug ||= "#{first_name.downcase[0, 3]}-#{last_name.downcase[0, 3]}"
  end
end

Now let’s generate an instance of the Person class to make sure everything looks OK:

ruby-2.2.2-p95 (main):0 > matt = Person.new(first_name: "Matt", last_name: "Campbell")
#<Person:0x007fca00179bd0 @first_name="Matt", @last_name="Campbell">
ruby-2.2.2-p95 (main):0 > matt.slug
"mat-cam"

This checks out. Our slug method is pretty dumb, but let’s say it becomes more complex: we amend slug to handle duplicates. As it stands, “Arthur MacCormack” and “Art MacNulty” will have the same slug and so are not uniquely identifiable by their slug.

The point of interest here is NOT the logic you end up implementing to make slug more unique. What’s of interest is how you can fuzz test whatever logic you end up implementing throughout your test suite.
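
(For the curious, one hypothetical sketch of such logic - the taken_slugs registry below is invented purely for illustration - might look like this:)

def slug
  @slug ||= begin
    base = "#{first_name.downcase[0, 3]}-#{last_name.downcase[0, 3]}"
    # taken_slugs is a hypothetical registry of slugs already in use
    collisions = taken_slugs.count { |s| s.start_with?(base) }
    collisions.zero? ? base : "#{base}-#{collisions + 1}"
  end
end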

Faker + Computed Hash = Fuzz Testing

Faker is a great library for generating random data, which I most typically use in conjunction with FactoryGirl to generate instances of my models (that is, the Ruby classes that represent the domain I’m modelling in the application).

Let’s see how we can utilize a computed hash to improve the degree of fuzz testing in my unit tests:

require 'faker'

# Here is the Person class definition again for reference.

class Person
  attr_reader :first_name, :last_name

  def initialize(hash_like_object = {})
    # The next two lines work because our hash-like-object, in some cases a lambda,
    # can be called using the same [] syntax as Hash#[]
    @first_name = hash_like_object[:first_name]
    @last_name  = hash_like_object[:last_name]
  end

  def slug
    @slug ||= "#{first_name.downcase[0, 3]}-#{last_name.downcase[0, 3]}"
  end
end
# Construct our computed hash lambda...

randomizer = lambda { |k| Faker::Name.send(k) }

And, voilà, we can initialize Person instances using our randomizer (which is in fact a lambda, and not a hash):

ruby-2.2.2-p95 (main):0 > person = Person.new(randomizer)
#<Person:0x007f81f0dfc8b0 @first_name="Nedra", @last_name="Pouros">
ruby-2.2.2-p95 (main):0 > person.first_name
"Nedra"
ruby-2.2.2-p95 (main):0 > person.last_name
"Pouros"
ruby-2.2.2-p95 (main):0 > person.slug
"ned-pou"

tl;dr

Need to generate pseudo-random instances of your classes in order to utilize fuzz testing across your test suite? Try initializing your instances with a computed hash, which in Ruby can be implemented as a lambda that you call via the familiar Hash#[] accessor syntax.

Domain Modeling Mom ‘N Pop Merchants in a Mobile Finance Platform


In my primary work project currently, I’m working with a small startup to build a mobile finance platform for the developing world.

The reason: people in the developing world are largely unbanked. Estimates vary, but around 59% of adults in developing economies don’t have an account at a financial institution. That said, using modern technologies (read: really cheap mobile phones, Bitcoin, and the web) it should be possible to bring banking-like services to the 2.5 billion people that use cash day-to-day almost exclusively.

Where the idea of Merchants comes in to play

In order to get our targeted users off cash, and onto a mobile finance platform, we need merchants in the developing world to be ready to accept payment via this mobile finance platform (which I’ll henceforth refer to as “MobiCommerce” for short).

In many ways, ‘merchants’ in our domain will be similar to everyday ‘users’, in that they’ll be sending and receiving funds virtually via MobiCommerce.

Here is the User resource’s schema for starters:

create_table "users", force: :cascade do |t|
  t.string   "phone_number",                        null: false
  t.datetime "created_at",                          null: false
  t.datetime "updated_at",                          null: false
  t.integer  "status",                default: 0,   null: false
  t.string   "name"
  t.string   "pin"
  t.decimal  "balance",               default: 0.0, null: false
  t.string   "device_id"
  t.integer  "balance_inquiry_count"
  t.string   "locale",                              null: false
  t.integer  "referrer_id"
end

The main attributes of interest for a particular User are phone_number, pin, and balance (at least insofar as executing a transaction on the platform is concerned).

What’s different about a merchant?

Originally, my answer to this question was something along the lines of: “Not that much is different. I’ll basically just need slightly different messages in the transaction process.” For example, we’re planning to charge a small fee to merchants in order to accept MobiCommerce as a payment option at their shops. So we’d need the system to identify that a merchant is on the receiving end of a transfer/transaction, and alert them via SMS accordingly - including notifying the merchant of the fee that will be taken out.

To domain model this, I first reached for inheritance:

# app/models/user/merchant.rb
class User::Merchant < User
  after_create :onboard_merchant!
  
  private
  
  def onboard_merchant!
    TELEPHONY_CLIENT.send_sms(to: self.phone_number, message: "Reply with your business name", from: self.device_id)
  end
end

However, this just felt wrong to me. Typically, subclassing an ActiveRecord-backed model in Rails is best for organizing a limited & specific set of domain-specific behavior. The classic example is something like:

class SignUp < User ; end

Now, that SignUp class is a great place to put things like validates :password_confirmation, presence: true kinda’ business logic. The term for this in Ruby is “form models”. Any domain and/or business logic that pertains uniquely to signing up a user now has a home. This class gives you a perfect place to encapsulate that behavior.
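
For instance, a sketch of such a form model (the terms_of_service validation is an assumed extra attribute, just for illustration):

class SignUp < User
  # Sign-up-only business logic lives here; the core User model stays lean.
  validates :password_confirmation, presence: true
  validates :terms_of_service, acceptance: true
end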

The Break Point

I quickly hit a point with my Merchant resource where I realized it had outgrown its inheritance from the User class. Instead of simply adding merchant-specific business logic & behaviors into this class, I found myself overwriting many of the methods inherited from User in order to tweak the desired behavior when a merchant was involved in a transaction.

A merchant is really just a user with an associated Business

Thanks to my good friend Chris Lee, I arrived at a much better solution to this “where to house my Merchant business logic” predicament.

Remember, I originally inherited from User because I still needed all the logic that connected two people (merchants or non-merchants alike) doing a financial transaction - either a payment to a store, or a Venmo-style peer-to-peer transfer.

Chris pointed out that I could instead organize my merchant-related logic into its own model, called Business. Now, a “merchant” in my system is simply:

class User < ActiveRecord::Base
  # most code omitted...
  has_one :business

  def merchant?
    business.present?
  end
end

That is, it’s just a user instance with an associated business. Much cleaner, much more elegant, and much more expressive. Let’s look at some examples.

First, this is how I can assess the merchant-specific transaction fee within my Transaction class:

class Transaction < ActiveRecord::Base
  # most code omitted...
  belongs_to :sender, class_name: "User"
  belongs_to :receiver, class_name: "User"

  before_create :assess_merchant_fee!, if: :receiver_is_merchant?

  private

  def receiver_is_merchant?
    receiver.merchant?
  end

  def assess_merchant_fee!
    # deduct the fee from the amount received by the merchant and notify them...
  end
end

I was so relieved that I had a home for all of my Business-related logic, that I felt compelled to write this blog post. I hope you’ve enjoyed reading it :smile:

P.S. From what I understand, Facebook uses this same approach to managing their Business Pages.

How-To: Manual JSON-endpoint Testing Made Easy


Let’s say your shiny new web application relies upon a 3rd party REST API like, say, Twilio. Those guys and gals do a really nice job adhering to REST principles when it comes to their API’s design. But as a software developer trying to communicate with their API, what are the practical implications of having a RESTful API on the other end of the wire?

Quick REST refreshment…

REST is a fairly large set of principles, but for this example we’ll focus on one aspect: The ‘R’ in REST stands for ‘REpresentational (State Transfer)’.

All URLs referenced in Twilio’s documentation have the following base:

https://api.twilio.com/2010-04-01

Now, we want to dig a little deeper into the “subresources” that Twilio exposes for our account. Take a look at the following URL endpoint (truncated for brevity):

https://api.twilio.com/2010-04-01/Accounts/AC228b9.../SMS/Messages/SM1f0e8ae6ade43cb3c0ce4525424e404f.json

Because the Twilio API is RESTful, we can observe the URI itself and garner quite a bit of information about the resource we’re requesting. In this case, it’s clearly a particular SMS instance generated by (presumably our) account ID AC228b9. The “representation” of this SMS resource is extremely intuitive, and we have REST to thank for it!

‘R’ is for ‘Representational’

But I want to focus now on something I haven’t yet mentioned regarding the URL above - specifically, the .json suffix. RESTful APIs, like Twilio’s, typically allow a client (e.g. web browser, curl, etc.) to request a particular representation of the desired resource. JSON has become an extremely popular such representation because, “It is easy for humans to read and write… It is easy for machines to parse and generate.” Given its ease-of-use and ubiquity across the interwebs, you will inevitably run into JSON endpoints as a web developer. There are many tools for working with the JSON response, but I think I may have come across one of the best strategies particularly for REPL-driven development and prototyping enthusiasts…

Step 1: Get jq

jq is a command-line utility for parsing JSON input from STDIN (or from files, etc.; it’s a BASH utility after all). Install it with:

brew install jq

Now, play around with it - try something like this (where jq . kicks off a jq process which waits for your input from STDIN):

⇒  jq .
{"hello":"world"}
{
  "hello": "world"
}

Step 2: GET (via curl) your JSON endpoint

There is a great resource for working with JSON called JSON Test. We can easily combine one of the JSON Test endpoints with curl to explore the (hypothetical) JSON representation of a resource like so:

⇒  curl -s http://headers.jsontest.com
{
   "Host": "headers.jsontest.com",
   "User-Agent": "curl/7.30.0",
   "Accept": "*/*"
}

Step 3: Combine steps 1 and 2 - it’s that easy!

Now that we have jq and curl down, we simply put them together by piping curl’s STDOUT into the jq program like so:

⇒  curl -s http://headers.jsontest.com | jq .
{
  "Accept": "*/*",
  "User-Agent": "curl/7.30.0",
  "Host": "headers.jsontest.com"
}

It might not look like you get much advantage by using jq over the standard curl formatted output - but jq really shines when you want to be able to sift through a very large JSON hash. Let’s say you’re reading from a fake.json file which contains hundreds of lines of JSON (you can take a quick look at the file in this gist). That bad boy has 310 lines of JSON to be exact - ain’t nobody got time fo’ dat! We can read the file and pipe the output into our trusty little friend jq; and then we can quickly identify, say, the first person’s favorite fruit as follows:

⇒  jq ".[0].favoriteFruit" < fake.json
"strawberry"

(Note: I’m using " above because zsh passes arguments to programs a little differently than bash. If using bash, you should be able to do simply: jq .[0].favoriteFruit without the quotes.)

By doing [0] I obtain the first element of the JSON array (which contains information relating to our first person); and jq allows me to pluck the value at a given key - in this case, the favoriteFruit key.

Conclusion

By combining curl with jq, you should never again have to struggle with manually quick-checking any server-generated JSON response that comes your way. Let me know if you found this helpful in the comments below!