<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[@asbjornenge]]></title><description><![CDATA[Swinging madly across the sun]]></description><link>https://asbjornenge.com/</link><image><url>https://asbjornenge.com/favicon.png</url><title>@asbjornenge</title><link>https://asbjornenge.com/</link></image><generator>Ghost 2.16</generator><lastBuildDate>Thu, 02 Apr 2026 23:28:45 GMT</lastBuildDate><atom:link href="https://asbjornenge.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The non-problem of deepfakes]]></title><description><![CDATA[<p><a href="https://en.wikipedia.org/wiki/Deepfake">Deepfakes</a> have been discussed for a long time, and have been flagged as a major problem by many prominent people. I stumbled across it again in one of the recent episodes of the <a href="https://www.samharris.org/podcasts">Waking Up</a> podcast. 
I can't remember if it was the one with <a href="https://www.samharris.org/podcasts/making-sense-episodes/290-what-went-wrong">Marc Andreessen</a> or the one</p>]]></description><link>https://asbjornenge.com/the-non-problem-of-deep-fakes/</link><guid isPermaLink="false">62ee5d2e6f4cd5000108ec5a</guid><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Sat, 06 Aug 2022 13:12:13 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1572435555646-7ad9a149ad91?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fGNyeXB0b2dyYXBoeXxlbnwwfHx8fDE2NTk3OTEwNjA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=1080" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1572435555646-7ad9a149ad91?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fGNyeXB0b2dyYXBoeXxlbnwwfHx8fDE2NTk3OTEwNjA&ixlib=rb-1.2.1&q=80&w=1080" alt="The non-problem of deepfakes"><p><a href="https://en.wikipedia.org/wiki/Deepfake">Deepfakes</a> have been discussed for a long time, and have been flagged as a major problem by many prominent people. I stumbled across it again in one of the recent episodes of the <a href="https://www.samharris.org/podcasts">Waking Up</a> podcast. I can't remember if it was the one with <a href="https://www.samharris.org/podcasts/making-sense-episodes/290-what-went-wrong">Marc Andreessen</a> or the one with <a href="https://www.samharris.org/podcasts/making-sense-episodes/280-the-future-of-artificial-intelligence">Eric Schmidt</a>. 
Either way, even these highly technologically advanced people seem to miss an obvious mitigation to this problem: <strong><a href="https://en.wikipedia.org/wiki/Digital_signature">digital signatures</a></strong>.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://asbjornenge.com/content/images/2022/08/digisig.png" class="kg-image" alt="The non-problem of deepfakes"></figure><!--kg-card-end: image--><p>By having prominent <strong>institutions</strong>, content <strong>creators</strong> and <strong>people</strong> (everyone?) publish a cryptographic signature alongside their content - anyone can <strong>verify</strong> the signature. They will of course have to decide for themselves whether they trust the institution, creator or person, but they can at least verify the content's origin.</p><p>If a piece of content shows up <strong>without</strong>, or with a <strong>completely new</strong>, digital signature, that is a sign to be <strong>very skeptical</strong> of it.</p><p>If a piece of content shows up from a prominent content creator <strong>with a valid digital signature</strong>, one that has been used multiple times in the past, we can be <strong>less skeptical</strong> of it.</p><p><strong>None of this is absolute</strong>. Private keys can be lost, and content creators can be pressured, enticed or misled. So we still need to tread carefully. 
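</p><p>To make the scheme concrete, here is a minimal Python sketch of the sign/verify flow. The standard library has no public-key signatures, so HMAC stands in here for a real asymmetric scheme such as Ed25519 (where verifiers would only need the creator's <em>public</em> key); all names are made up for illustration.</p><pre><code class="language-python">import hmac
import hashlib

def sign(content, key):
    # Produce a tag for the published content (stand-in for a real signature)
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content, tag, key):
    # With a real asymmetric scheme, anyone could run this using the public key
    return hmac.compare_digest(sign(content, key), tag)

creator_key = b'creator-secret'    # stand-in for a creator's signing key
video = b'raw bytes of the video'
tag = sign(video, creator_key)

verify(video, tag, creator_key)               # True - content matches the tag
verify(b'deepfaked bytes', tag, creator_key)  # False - content was altered
</code></pre><p>A platform could run exactly this kind of check automatically on upload and flag unsigned or mismatching content.</p><p>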
But it seems to me this would be a <strong>very good line of defense</strong> against <strong>deepfakes</strong> and other "<strong>fake news</strong>".</p><p>And a lot of this stuff can be <strong>automated</strong> by digital tools and social media platforms!</p><p>As a fun anecdote, I once <a href="https://twitter.com/asbjornenge/status/862582507363069952">pitched</a> this idea to Wikipedia's creator <a href="https://twitter.com/jimmy_wales">Jimmy Wales</a> for his <a href="https://en.wikipedia.org/wiki/WikiTribune">WikiTribune</a>. He seemed to think it was a non-problem 😅 🤦🏻‍♂️</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://asbjornenge.com/content/images/2022/08/Screenshot-from-2022-08-06-14-49-28.png" class="kg-image" alt="The non-problem of deepfakes"><figcaption>Jimmy doesn't get it</figcaption></figure><!--kg-card-end: image--><p>enjoy.</p>]]></content:encoded></item><item><title><![CDATA[Calling TezID]]></title><description><![CDATA[<p>In the previous post <a href="https://www.asbjornenge.com/tezid/">here</a> we introduced TezID and why we built it.</p><p>In this post we will outline <strong>how to use</strong> TezID.</p><p>We will use the example of an ICO contract that requires participants to have 2 valid TezID proofs in order to qualify for registration. 
We will use</p>]]></description><link>https://asbjornenge.com/calling-tezid/</link><guid isPermaLink="false">6062f3926fd6220001c6c066</guid><category><![CDATA[tezos]]></category><category><![CDATA[tezid]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Tue, 30 Mar 2021 10:57:23 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1524514587686-e2909d726e9b?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDEzfHxlbmdpbmVlcmluZ3xlbnwwfHx8fDE2MTcwOTk5MzU&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=1080" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1524514587686-e2909d726e9b?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMTc3M3wwfDF8c2VhcmNofDEzfHxlbmdpbmVlcmluZ3xlbnwwfHx8fDE2MTcwOTk5MzU&ixlib=rb-1.2.1&q=80&w=1080" alt="Calling TezID"><p>In the previous post <a href="https://www.asbjornenge.com/tezid/">here</a> we introduced TezID and why we built it.</p><p>In this post we will outline <strong>how to use</strong> TezID.</p><p>We will use the example of an ICO contract that requires participants to have 2 valid TezID proofs in order to qualify for registration. We will use <a href="https://smartpy.io/">SmartPy</a> code in our examples.</p><p>Let's dive right into code!</p><!--kg-card-begin: markdown--><pre><code class="language-python">## Types
#

TGetProofsRequestPayload = sp.TRecord(
    address=sp.TAddress, 
    callback_address=sp.TAddress, 
    callback_entrypoint=sp.TString
)
TGetProofsResponsePayload = sp.TRecord(
  address = sp.TAddress,
  proofs = sp.TMap(sp.TString, sp.TRecord(
    register_date = sp.TTimestamp, 
    verified = sp.TBool
  ))
)

## Contract
#

class ICO(sp.Contract):

  def __init__(self, tezid, requiredProofs):
    self.init(
      tezid = tezid,
      requiredProofs = requiredProofs,
      participants = {}
    )

  @sp.entry_point
  def signup(self):
    c = sp.contract(TGetProofsRequestPayload, self.data.tezid, entry_point=&quot;getProofs&quot;).open_some()
    sp.transfer(sp.record(address=sp.sender, callback_address=sp.self_address, callback_entrypoint=&quot;register&quot;), sp.mutez(0), c)

  @sp.entry_point
  def register(self, ptr):
    sp.if sp.sender != self.data.tezid:
      sp.failwith('Only TezID can register')
    sp.set_type(ptr, TGetProofsResponsePayload)
    validProofs = sp.local(&quot;validProofs&quot;, [])
    sp.for requiredProof in self.data.requiredProofs:
      sp.if ptr.proofs.contains(requiredProof):
        sp.if ptr.proofs[requiredProof].verified:
          validProofs.value.push(requiredProof)
    sp.if sp.len(self.data.requiredProofs) == sp.len(validProofs.value):
      self.data.participants[ptr.address] = {}
</code></pre>
<!--kg-card-end: markdown--><p>Ok, so here we have some datatypes and a Tezos Smart Contract called ICO. </p><p>The datatypes <code>TGetProofsRequestPayload</code> and <code>TGetProofsResponsePayload</code> represent the structure of the data payload when calling and receiving a callback from TezID.</p><p>The ICO contract has the <code>__init__</code> function to set some initial storage data. In addition it has two entrypoints: <code>signup</code> and <code>register</code>.</p><h3 id="-__init__">@__init__</h3><p>Init is called when we create the contract. It sets some initial data for the contract:<br><br>* tezid ~ the address of the TezID contract<br>* requiredProofs ~ a list of required TezID proofs<br>* participants ~ the registered participants of the ICO</p><h3 id="-signup">@signup</h3><p>Signup can be called by anyone that attempts to sign up for the ICO. Signup will trigger a call to TezID's <code>getProofs</code> entrypoint in order to retrieve the registered proofs for the calling address, and states that it would like its own <code>register</code> entrypoint to be called with this information (this is what we call a callback pattern, and it's how Tezos contracts can interact - at least for now).</p><h3 id="-register">@register</h3><p>Register is where TezID will call back with the registered proofs for some address. Note that it is set up to only allow TezID to call this endpoint. 
It receives the proofs from TezID, compares them to the required proofs set in <code>requiredProofs</code>, checks their validity, and adds the address to the <code>participants</code> map if it all checks out.</p><p>And that is it 🎉 </p><p>We have now created a contract that requires participants to have some proofs registered on TezID, and we have more confidence that the registered participants are real people.</p><h3 id="a-smartpy-testcase">A SmartPy testcase</h3><p>The test code for this ICO contract is quite long since it requires all sorts of setup, but here it is if you are interested:</p><!--kg-card-begin: markdown--><pre><code class="language-py">@sp.add_test(name = &quot;Call TezID from other contract&quot;)
def test():
  admin = sp.test_account(&quot;admin&quot;)
  user = sp.test_account(&quot;User&quot;)
  user2 = sp.test_account(&quot;User2&quot;)
  user3 = sp.test_account(&quot;User3&quot;)
  cost = sp.tez(5)
  proof1 = sp.record(
    type = 'email'
  )
  proof2 = sp.record(
    type = 'phone'
  )
  proofVer1 = sp.record(
    tzaddr = user.address,
    type = 'email'
  )
  proofVer2 = sp.record(
    tzaddr = user.address,
    type = 'phone'
  )

  scenario = sp.test_scenario()
  c1 = TezID(admin.address, cost)
  scenario += c1
  c2 = ICO(c1.address, [&quot;email&quot;,&quot;phone&quot;])
  scenario += c2
  
  ## A user with the correct valid proofs can register as participant
  #
  scenario += c1.registerAddress().run(sender = user, amount = sp.tez(5))
  scenario += c1.registerProof(proof1).run(sender = user, amount = sp.tez(5))
  scenario += c1.registerProof(proof2).run(sender = user, amount = sp.tez(5))
  scenario += c1.verifyProof(proofVer1).run(sender = admin)
  scenario += c1.verifyProof(proofVer2).run(sender = admin)
  scenario += c2.signup().run(sender = user)
  scenario.verify(c2.data.participants.contains(user.address))
  
  ## A user without the correct valid proofs cannot register as participant
  #
  scenario += c1.registerAddress().run(sender = user2, amount = sp.tez(5))
  scenario += c1.registerProof(proof1).run(sender = user2, amount = sp.tez(5))
  scenario += c1.registerProof(proof2).run(sender = user2, amount = sp.tez(5))
  scenario += c1.verifyProof(proofVer1).run(sender = admin)
  scenario += c2.signup().run(sender = user2)
  scenario.verify(c2.data.participants.contains(user2.address) == False)
  
  ## A user not registered on TezID cannot register as participant
  #
  scenario += c2.signup().run(sender = user3)
  scenario.verify(c2.data.participants.contains(user3.address) == False)
  
  ## Only TezID can call register endpoint
  #
  emailProof = sp.record(
      register_date = sp.timestamp(0),
      verified = True
  )
  phoneProof = sp.record(
      register_date = sp.timestamp(0),
      verified = True
  )
  proofs = {}
  proofs['email'] = emailProof
  proofs['phone'] = phoneProof
  pr = sp.record(address = user3.address, proofs = proofs)
  scenario += c2.register(pr).run(sender = user3, valid=False)
</code></pre>
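<p>To make the flow concrete outside of SmartPy, here is a minimal plain-Python model of the same callback pattern. These are hypothetical classes for illustration only - the real on-chain types, failure modes and gas semantics differ.</p><pre><code class="language-python">## A plain-Python model of the getProofs/register callback round-trip

class MockTezID:
    def __init__(self):
        self.proofs = {}  # maps an address to a dict of proof_type: {'verified': bool}

    def getProofs(self, address, callback):
        # TezID looks up the stored proofs and calls back into the requesting contract
        callback(sender='tezid', address=address, proofs=self.proofs.get(address, {}))

class MockICO:
    def __init__(self, tezid, required_proofs):
        self.tezid = tezid
        self.required_proofs = required_proofs
        self.participants = set()

    def signup(self, sender):
        # First call: ask TezID for the caller's proofs, naming our own callback
        self.tezid.getProofs(sender, self.register)

    def register(self, sender, address, proofs):
        # Second call: only TezID may invoke the callback
        if sender != 'tezid':
            raise PermissionError('Only TezID can register')
        valid = [p for p in self.required_proofs if p in proofs and proofs[p]['verified']]
        if len(valid) == len(self.required_proofs):
            self.participants.add(address)

tezid = MockTezID()
tezid.proofs['tz1alice'] = {'email': {'verified': True}, 'phone': {'verified': True}}
ico = MockICO(tezid, ['email', 'phone'])
ico.signup('tz1alice')  # 'tz1alice' ends up in ico.participants
</code></pre><p>The same two-step shape applies on-chain: <code>signup</code> emits a transfer, and the state change only happens when the callback arrives.</p>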
<!--kg-card-end: markdown--><p>enjoy.</p>]]></content:encoded></item><item><title><![CDATA[TezID]]></title><description><![CDATA[<p><a href="https://tezid.net">TezID</a> is an identity oracle for Tezos 🤖✌️</p><p>It allows users to prove that they own certain digital property such as an email address, phone number, etc. And perhaps in the future a physical address and even government-issued IDs.</p><h2 id="how-it-works">How it works</h2><p>At its core TezID is a Smart Contract</p>]]></description><link>https://asbjornenge.com/tezid/</link><guid isPermaLink="false">60421aab46c54b000165062f</guid><category><![CDATA[tezos]]></category><category><![CDATA[identity]]></category><category><![CDATA[smart contracts]]></category><category><![CDATA[tezid]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Tue, 30 Mar 2021 10:57:11 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1601723897234-327147304013?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDE4fHxpZGVudGl0eXxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=1080" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1601723897234-327147304013?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDE4fHxpZGVudGl0eXxlbnwwfHx8&ixlib=rb-1.2.1&q=80&w=1080" alt="TezID"><p><a href="https://tezid.net">TezID</a> is an identity oracle for Tezos 🤖✌️</p><p>It allows users to prove that they own certain digital property such as an email address, phone number, etc. And perhaps in the future a physical address and even government-issued IDs.</p><h2 id="how-it-works">How it works</h2><p>At its core TezID is a Smart Contract with registered addresses and verified proofs for each address. A user can register their address and different proofs, and use the oracle to verify these proofs. 
</p><p>The flow:</p><ol><li>A <strong>user</strong> connects their wallet to the TezID dapp</li><li>The <strong>user</strong> registers their address with the TezID Smart Contract</li><li>The <strong>user</strong> registers one of the supported proofs with the TezID Smart Contract</li><li>The <strong>user</strong> requests a verification code from the oracle</li><li>The <strong>oracle</strong> verifies the uniqueness of the property (you cannot register the same email twice)</li><li>The <strong>oracle</strong> sends a verification code to the property</li><li>The <strong>user</strong> receives the verification code and enters it in the TezID dapp</li><li>The <strong>oracle</strong> marks the proof as verified in the Smart Contract</li></ol><p>🙌</p><h2 id="what-is-the-point">What is the point?</h2><p>It can be used from other Smart Contracts to verify that a Tezos address is associated with a real human being.</p><p>I'm interested in collective decision-making mechanisms. Some of these require unique identities. One example is <a href="https://en.wikipedia.org/wiki/Quadratic_voting">Quadratic Voting</a>. An example usage of TezID would be a QV voting contract that queries the TezID contract to check whether a certain address has registered a government-issued ID proof within the last year before allowing that address to vote.</p><p>But who knows what other use-cases might pop up 😬✨</p><h2 id="what-about-security">What about security?</h2><p>You might wonder if it's such a good idea to connect your Tezos address to your email or phone number etc. 🤔 This is not a good idea at all! 😳</p><p>Luckily this is not how TezID works. 😅 On the Smart Contract TezID only stores <strong>that</strong> you have registered a proof; its <em>type</em>, the <em>date</em> this happened, and a boolean indicating whether the proof has been <em>verified</em> by the oracle.</p><p><strong>The TezID oracle only stores a hashed representation of your property</strong>. 
This means, even if the TezID oracle database was compromised, all an attacker would get would be a list of hashes.</p><p>This is however not unproblematic. If an attacker gets hold of the list of hashes, they could attempt to hash known email addresses and compare the results to the stolen list.</p><p>Luckily this will not be so easy with the TezID hash table, since each hash is <strong>salted</strong> and generated using <strong>pbkdf2</strong> (key stretching). We follow the best-practice principles outlined <a href="https://crackstation.net/hashing-security.htm">here</a>. 🔒</p><p>We have done what we can to limit the attack surface of TezID.</p><p>Also check out our <a href="#tips">Tips</a> section for how to avoid linking your TezID address with your other Tezos addresses.</p><h2 id="why-is-it-costly-to-register-proofs">Why is it costly to register proofs?</h2><p>We have added a cost for registering your address and proofs on TezID. At launch the cost will be around $10 for each, and we have the ability to adjust this over time.</p><p>We have introduced a cost here for several reasons:</p><ul><li>To incentivize behaviour</li><li>To fund running the oracle</li><li>To fund further development of the oracle</li></ul><h3 id="to-incentivice-behaviour">To incentivize behaviour</h3><p>It's super important to discourage people from registering multiple addresses on TezID. The main idea is to have 1 Tezos address map to 1 human being.</p><p>For many of our proof types this is impossible to ensure. But a high cost for registering proofs is a great way to discourage this behaviour. You can register 2-3 addresses, sure. 
But if you try to register 1000 it's going to cost you 😅💸.</p><p>Contracts using TezID should also require that multiple proofs be registered, and at least one that is not easily duplicated.</p><h3 id="to-fund-running-the-oracle">To fund running the oracle</h3><p>It's not free to run the TezID oracle. The cloud infrastructure has a cost. Sending out SMSes has a cost. And the humans need to eat.</p><h3 id="to-fund-further-development-of-the-oracle">To fund further development of the oracle</h3><p>We have lots more ideas we want to realize with TezID, and any additional funds after running costs will go to further development.</p><h2 id="renewing-proofs">Renewing proofs</h2><p>The current owner of a property on TezID can renew it at any time. If a property has not been renewed for 365 days (depending on the type of property), anyone in control of this property can use it to register a new proof.</p><p>It is a good idea for other contracts using TezID to require proofs less than a year old.</p><h2 id="calling-from-other-contracts">Calling from other contracts</h2><p>You can see an example of how to use TezID from another contract <a href="https://www.asbjornenge.com/calling-tezid/">here</a>.</p><h2 id="what-about-w3c-decentralized-identity">What about W3C Decentralized Identity?</h2><p>We are aware of the work being done on W3C Decentralized Identity on Tezos spearheaded by <a href="https://www.spruceid.com/">Spruce Systems</a>. We see TezID as supplemental to this work. On TezID you can verify your digital (and perhaps in the future physical) property. This makes TezID more of an <em><strong>issuer</strong></em> in a Decentralized Identity context. 
And who knows, perhaps one day someone will build a DID issuer layer on top of TezID 😬🙊</p><p>Looking forward to any and all feedback, be it praise or criticism 🙌</p><p>✨🚀</p><h2 id="tips">Tips</h2><h4 id="avoid-linking-your-tezid-to-your-other-tezos-addresess">Avoid linking your TezID to your other Tezos addresses</h4><p>In order to register on TezID you need a Tezos address with some XTZ. You can just make a new one and transfer some Tez to it, but then your "main" Tezos address and all other addresses it has been in contact with (sent to/from) will be linkable, and if just one of those addresses is connected to your personal identity, all the others will be too 😳🙈 Such is the nature of public blockchains.</p><p>However, one "trick" you can do is to create an empty address. Then go on an exchange and purchase some XTZ. Transfer the XTZ from the exchange to your newly created address and register that address on TezID. That way there is no link between your other Tezos addresses and TezID. The exchange still has the information linking your identity and your TezID address, but there is no link to your other addresses on-chain.</p><p>enjoy.</p>]]></content:encoded></item><item><title><![CDATA[TezPi]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Running <a href="https://tezos.com/">Tezos</a> on the <a href="https://www.raspberrypi.org/products/raspberry-pi-4-model-b/">Raspberry Pi 4</a>.</p>
<p>I wanted to see if I could build and run Tezos on the RPi4. I'm currently running a bakery in the cloud; 4 nodes and a local signer. But cloud machines have a running cost, at least if you want a somewhat beefy machine with a</p>]]></description><link>https://asbjornenge.com/tezpi/</link><guid isPermaLink="false">5dcc5bf0d258560001777d25</guid><category><![CDATA[baking]]></category><category><![CDATA[docker]]></category><category><![CDATA[linux]]></category><category><![CDATA[tezos]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Wed, 13 Nov 2019 21:26:57 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1553406830-f6e44ac97624?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1553406830-f6e44ac97624?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="TezPi"><p>Running <a href="https://tezos.com/">Tezos</a> on the <a href="https://www.raspberrypi.org/products/raspberry-pi-4-model-b/">Raspberry Pi 4</a>.</p>
<p>I wanted to see if I could build and run Tezos on the RPi4. I'm currently running a bakery in the cloud; 4 nodes and a local signer. But cloud machines have a running cost, at least if you want a somewhat beefy machine with a large SSD disk. The RPi4 looks interesting since it now comes with a 4GB RAM model and a USB 3.0 interface for connecting a fast SSD.</p>
<p><strong>UPDATED 25.11.2019</strong></p>
<p>After discovering <a href="https://gitlab.com/tezos/tezos/issues/616">this bug</a> I moved from the default Raspbian image to <a href="https://ubuntu.com/download/raspberry-pi">Ubuntu 19.10 for Raspberry Pi</a> running <code>aarch64</code> instead of the 32-bit <code>armv7</code>. I'm also running the rootfs from the USB SSD instead of the MicroSD card, which gave a significant performance boost.</p>
<h2 id="thehardware">The hardware</h2>
<ul>
<li><a href="https://www.raspberrypi.org/products/raspberry-pi-4-model-b/">4GB version Raspberry Pi4</a></li>
<li><a href="https://www.komplett.no/product/1090447/datautstyr/lagring/harddiskerssd/ssd-25/crucial-bx500-480gb-25-ssd?noredirect=true?noredirect=true">Crucial BX500 480GB 2,5&quot; SSD</a></li>
<li><a href="https://www.komplett.no/product/911711/datautstyr/lagring/harddiskerssd/tilbehoer/st-lab-usb-30-to-sata-6g?noredirect=true?noredirect=true">ST Lab USB 3.0 to SATA 6G</a></li>
</ul>
<h2 id="thesoftware">The software</h2>
<p>I initially used the standard <a href="https://www.raspberrypi.org/downloads/raspbian/">Raspbian</a> distribution, and chose the <code>Raspbian Buster Lite</code> image since I don't need a desktop for this project.</p>
<p>However, after discovering a bug with tezos on 32-bit architectures, I moved to <a href="https://ubuntu.com/download/raspberry-pi">Ubuntu 19.10 for Raspberry Pi</a> running <code>aarch64</code> instead. The procedure is exactly the same: flash the image to a MicroSD card and boot up.</p>
<p>At the time of writing the Ubuntu ISO ships with the <code>5.3.0-1007</code> kernel. This kernel has some issues with memory &gt; 3GB, so if you have the 4GB version you need to mount the boot partition elsewhere and add the following to <code>usercfg.txt</code>:</p>
<pre><code>total_mem=3072
dtparam=audio=on
</code></pre>
<p>There is a newer kernel available though, <code>5.3.0-1012</code>, which fixes this issue. If you update to it you can remove these lines and enjoy the full 4GB.</p>
<p>If you want to run the rootfs from the USB SSD (which I highly recommend since it's a real performance boost), flash the exact same Ubuntu image to the SSD disk too. After that we just have to modify the <code>root=</code> entry of <code>/boot/firmware/btcmd.txt</code> (keep the rest):</p>
<pre><code>root=/dev/sda2
</code></pre>
<p>Do that on both the MicroSD card's boot partition and the SSD disk's. Sometimes Ubuntu mounts the SSD boot partition on <code>/boot/firmware</code> for some reason.</p>
<p>The only other software I installed was <a href="https://www.docker.com/">Docker</a>. I ❤️ Docker. Cannot live without it. Get it!</p>
<pre><code>curl -sSL https://get.docker.com | sh
</code></pre>
<p>So, I put together a declarative <a href="https://github.com/asbjornenge/tezos-docker/blob/master/Dockerfile-ubuntu-arm-aarch64">Dockerfile</a> for Tezos that builds on arm. It's also available on <a href="https://hub.docker.com/r/asbjornenge/tezos-ubuntu-arm64">Docker Hub</a>.</p>
<p>It builds and works on the RPi4 😬🎉</p>
<h2 id="tezos">Tezos</h2>
<p>Next, let's make a few folders to keep the Tezos data:</p>
<pre><code>mkdir -p /data/tezos/mainnet/client
mkdir -p /data/tezos/mainnet/node/data
mkdir -p /data/tezos/mainnet/snapshots
</code></pre>
<p>I want to run a full node, so let's get a snapshot so we don't have to sync the full chain. Find snapshots <a href="https://snapshots.tulip.tools/#/">here</a>.</p>
<pre><code>cd /data/tezos/mainnet/snapshots
wget &lt;snapshot_url&gt;
</code></pre>
<p>Next, let's load the snapshot:</p>
<pre><code>docker run --rm \
-v /data/tezos/mainnet/client:/var/run/tezos/client \
-v /data/tezos/mainnet/node:/var/run/tezos/node \
-v /data/tezos/mainnet/snapshots:/snapshots \
--entrypoint bash \
-it asbjornenge/tezos-ubuntu-arm:latest
</code></pre>
<p>Notice I modified the <code>entrypoint</code> to run a container without running the <code>entrypoint.sh</code> script - it does not currently support loading a snapshot.</p>
<p>Next, let's make sure the permissions of our folders are correct:</p>
<pre><code>container&gt; cat /etc/passwd | grep tezos
</code></pre>
<p>Note the <code>uid</code> and <code>gid</code> of the tezos user. Change the permissions (outside the container):</p>
<pre><code>chown -R &lt;uid&gt;:&lt;gid&gt; /data/tezos/mainnet
</code></pre>
<p>Next, inside the container, load the snapshot:</p>
<pre><code>container&gt; tezos-node snapshot import /snapshots/mainnet.full --data-dir /var/run/tezos/node/data/
</code></pre>
<p>Before we can start the node we need to add the <code>alphanet_version</code> file (because of <a href="https://gitlab.com/tezos/tezos/issues/593">this</a> bug):</p>
<pre><code>echo 2018-06-30T16:07:32Z-betanet &gt; /var/run/tezos/node/alphanet_version
</code></pre>
<p>Now we are ready to start the node!</p>
<pre><code>docker run --rm \
-p 8732:8732 \
-v /data/tezos/mainnet/client:/var/run/tezos/client \
-v /data/tezos/mainnet/node:/var/run/tezos/node \
-v /data/tezos/mainnet/snapshots:/snapshots \
-it asbjornenge/tezos-ubuntu-arm:latest tezos-node
</code></pre>
<p>And that's it 😬🎉 Give it some time to start and you are running a full Tezos node on the RPi4 🚀</p>
<p>Once it's started, you can verify the block level of your node:</p>
<pre><code>curl -s localhost:8732/chains/main/blocks/head | jq .header.level
</code></pre>
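<p>If you want to keep an eye on sync progress, the same RPC can be polled from a small script. A stdlib-only Python sketch (the function names are my own; it assumes the default RPC port exposed in the <code>docker run</code> above):</p>
<pre><code class="language-python">import json
import time
import urllib.request

NODE = 'http://localhost:8732'  # default RPC port from the docker run above

def head_level(block):
    # Same field the jq expression reads: .header.level
    return block['header']['level']

def current_level(node=NODE):
    # Fetch the chain head from the node RPC and return its level
    with urllib.request.urlopen(node + '/chains/main/blocks/head') as res:
        return head_level(json.loads(res.read()))

def watch(interval=60):
    # Print the head level once a minute to watch the node catch up
    while True:
        print(current_level())
        time.sleep(interval)
</code></pre>
<p>Call <code>watch()</code> and interrupt it once the level stops climbing fast.</p>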
<h2 id="performance">Performance</h2>
<p>CPU performance and memory usage are well within bounds. The most important thing I wanted to benchmark was the disk read/write speed.</p>
<p>I used a very basic read/write test:</p>
<pre><code>cd /data

sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.49814 s, 195 MB/s

dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.51642 s, 708 MB/s
</code></pre>
<p>As we can see we get a write speed of around <code>200 MB/s</code> and a read speed of around <code>700 MB/s</code>. I was hoping for much better performance than this, since both the SSD and the adapter support SATA 3 and USB 3.1 &quot;SuperSpeed&quot;. But it seems the limiting factor here might be that the RPi4 has a USB 3.0 port.</p>
<p>I was a bit encouraged by checking the read/write speeds on my cloud nodes, where both were only around <code>80 MB/s</code> 😉</p>
<p><strong>UPDATE 27.11.2019</strong></p>
<p>It seems the above was also somewhat related to running the OS from the MicroSD card. After moving to Ubuntu and running the OS from the SSD I got the following results:</p>
<pre><code>sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.7732 s, 225 MB/s

dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.819122 s, 1.3 GB/s
</code></pre>
<p>🎉🚀</p>
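<p>The same rough measurement can be scripted if you prefer. A hedged Python sketch (my own helper, not part of the post's setup; the OS page cache will inflate the read number here, just as it can with <code>dd</code>):</p>
<pre><code class="language-python">import os
import time

def write_read_speed(path='tempfile', size_mb=64):
    # Write size_mb of zeros, then read them back, timing both passes
    chunk = bytes(1024 * 1024)
    start = time.time()
    with open(path, 'wb') as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    write_mbs = size_mb / (time.time() - start)
    start = time.time()
    with open(path, 'rb') as f:
        while f.read(1024 * 1024):
            pass
    read_mbs = size_mb / (time.time() - start)
    os.remove(path)
    return write_mbs, read_mbs
</code></pre>
<p>It returns MB/s for the write and read passes; bump <code>size_mb</code> toward 1024 to mirror the <code>dd</code> runs above.</p>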
<h2 id="improvements">Improvements</h2>
<ul>
<li>Put OS on SSD ✅</li>
</ul>
<p>That poor MicroSD card has a disk read speed of around <code>11 MB/s</code>, so I believe the <code>tezos-node</code> RPC API sometimes feels a bit sluggish on the Pi because the OS (and also Docker) was running on that slow MicroSD card.</p>
<ul>
<li>UPS (powerbank battery) &amp; 4G modem</li>
</ul>
<p>I want to make the nodes as highly available as possible, so I want to add a battery bank and a 4G modem so they stay up no matter what (almost) 😉</p>
<ul>
<li>Signer with Ledger support</li>
</ul>
<p>I haven't tested running a signer on the PI yet - it's on my TODO list.</p>
<p>Hope this was useful ✨</p>
<p>enjoy.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Sliq]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://github.com/asbjornenge/sliq">Sliq</a> is a <a href="https://ocamlpro.github.io/techelson/">techelson</a> testrunner for <a href="http://www.liquidity-lang.org/">liquidity</a> programs. It allows you to easily write and test smart contracts for the <a href="https://tezos.com/">Tezos</a> blockchain.</p>
<p>Sliq is a JavaScript CLI application wrapping a <a href="https://www.docker.com/">Docker</a> <a href="https://hub.docker.com/r/asbjornenge/sliq">image</a> containing <code>liquidity</code> and <code>techelson</code>.</p>
<p>Another awesome thing is that <code>liquidity</code> supports <a href="https://reasonml.github.io/">Reason</a> syntax, so now you can write</p>]]></description><link>https://asbjornenge.com/sliq/</link><guid isPermaLink="false">5c940aea32ed060001de30bd</guid><category><![CDATA[tezos]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[smart contracts]]></category><category><![CDATA[liquidity]]></category><category><![CDATA[reason]]></category><category><![CDATA[reasonml]]></category><category><![CDATA[ocaml]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Thu, 21 Mar 2019 22:10:28 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1549145159-2f1242ce0975?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1549145159-2f1242ce0975?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Sliq"><p><a href="https://github.com/asbjornenge/sliq">Sliq</a> is a <a href="https://ocamlpro.github.io/techelson/">techelson</a> testrunner for <a href="http://www.liquidity-lang.org/">liquidity</a> programs. It allows you to easily write and test smart contracts for the <a href="https://tezos.com/">Tezos</a> blockchain.</p>
<p>Sliq is a JavaScript cli application wrapping a <a href="https://www.docker.com/">docker</a> <a href="https://hub.docker.com/r/asbjornenge/sliq">image</a> containing <code>liquidity</code> and <code>techelson</code>.</p>
<p>Another awesome thing is that <code>liquidity</code> supports <a href="https://reasonml.github.io/">Reason</a> syntax, so now you can write both your contracts and your tests in Reason 💖</p>
<h2 id="install">Install</h2>
<p>Make sure you have <a href="https://nodejs.org/en/">node.js</a>, <a href="https://www.npmjs.com/">npm</a> and <a href="https://www.docker.com/">docker</a> installed (and permissions to <code>docker run</code> without sudo).</p>
<pre><code>npm install -g sliq
</code></pre>
<h2 id="use">Use</h2>
<pre><code>sliq --contracts contracts/Demo.reliq --tests tests/

Sliq
  Contracts
    ./contracts/Demo.reliq
  Tests
    ./tests/Tests.reliq
Running tests...
===== ./tests/Tests.reliq =====
Running test `Sliq`

running test script...
   timestamp: 1970-01-01 00:00:00 +00:00

applying operation CREATE[uid:0] (@address[1], &quot;tz1YLtLqD1fWHthSVHPD116oYvsd4PTAHUoc&quot;, None, true, true, 10000000utz) 
                       {
                           storage unit ;
                           parameter unit ;
                           code ...;
                       }
   timestamp: 1970-01-01 00:00:00 +00:00
   live contracts: none
=&gt; live contracts: &lt;anonymous&gt; (0utz) address[2]
                   &lt;anonymous&gt; (10000000utz) address[1]

running test script...
   timestamp: 1970-01-01 00:00:00 +00:00

applying operation TRANSFER[uid:2] address[0]@Sliq -&gt; address[2] 5000000utz &quot;reason&quot;
   timestamp: 1970-01-01 00:00:00 +00:00
   live contracts: &lt;anonymous&gt; (0utz) address[2]
                   &lt;anonymous&gt; (10000000utz) address[1]

running TRANSFER[uid:2] address[0]@Sliq -&gt; address[2] 5000000utz &quot;reason&quot;
   timestamp: 1970-01-01 00:00:00 +00:00
=&gt; live contracts: &lt;anonymous&gt; (5000000utz) address[2]
                   &lt;anonymous&gt; (10000000utz) address[1]

running test script...
   timestamp: 1970-01-01 00:00:00 +00:00

Done running test `Sliq`
</code></pre>
<p>NB! The first time you run <code>sliq</code> it pulls the required docker image from docker hub. It's about 182MB, so it takes a little while.</p>
<p>enjoy!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Introducing Chronos 🙌]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>👀🔙👆</p>
<p>Chronos is a scheduled task runner built for <a href="https://www.docker.com/">docker</a> cloud environments.</p>
<p>The common way to manage scheduled tasks is still to use <a href="https://en.wikipedia.org/wiki/Cron">cron</a>. When managing a complex infrastructure this quickly becomes tedious. Keeping track of a bunch of cron jobs spread out across your servers - or even on a</p>]]></description><link>https://asbjornenge.com/announcing-chronos/</link><guid isPermaLink="false">5c8d0c7032ed060001de304a</guid><category><![CDATA[docker]]></category><category><![CDATA[infra]]></category><category><![CDATA[backup]]></category><category><![CDATA[chronos]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Sat, 16 Mar 2019 15:14:20 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1518607692857-bff9babd9d40?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1518607692857-bff9babd9d40?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Introducing Chronos 🙌"><p>👀🔙👆</p>
<p>Chronos is a scheduled task runner built for <a href="https://www.docker.com/">docker</a> cloud environments.</p>
<p>The common way to manage scheduled tasks is still to use <a href="https://en.wikipedia.org/wiki/Cron">cron</a>. When managing a complex infrastructure this quickly becomes tedious. Keeping track of a bunch of cron jobs spread out across your servers - or even on a dedicated server - introduces state and is hard to manage and monitor.</p>
<p>So I decided to do something about it... 🙈</p>
<p>Introducing <a href="https://github.com/asbjornenge/chronos-app">Chronos</a> 🥳🎉</p>
<p><img src="https://github.com/asbjornenge/chronos-app/raw/master/screenshots/Chronos-1.png" alt="Introducing Chronos 🙌"></p>
<p>Chronos is a scheduled task runner for docker cloud environments.</p>
<p>In Chronos you can add tasks to run at specific times defined in cron syntax. Each task can have multiple steps. Steps are executed in order and stdout and stderr are stored for each execution.</p>
<p>Steps are executed on the <a href="https://github.com/asbjornenge/chronos-api">chronos-api</a> service. It's a very basic <a href="https://alpinelinux.org/">alpine-linux</a> container, so it has very few tools available. It does however contain a <code>docker</code> cli - and the idea is for you to run scheduled tasks using containers.</p>
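<p>To make the model concrete, here is a sketch of what a task with docker-based steps could look like, together with a minimal step runner. This is a hypothetical shape for illustration only - it is not Chronos' actual schema or API, and all the names are made up.</p>

```javascript
// Hypothetical task definition: a name, a schedule in cron syntax, and an
// ordered list of steps. Each step is a command the runner executes - in
// the spirit of Chronos, each one a `docker run`.
var task = {
  name: 'nightly-backup',
  schedule: '0 3 * * *', // cron syntax: every night at 03:00
  steps: [
    'docker run --rm backup-image dump-db',
    'docker run --rm backup-image upload-dump'
  ]
}

// Run the steps in order, storing stdout/stderr for each execution.
// `exec` is injected so the runner can be exercised without docker.
function runTask(task, exec) {
  return task.steps.map(function(cmd) {
    var result = exec(cmd)
    return { cmd: cmd, stdout: result.stdout, stderr: result.stderr }
  })
}
```

<p>A real runner would shell out to the <code>docker</code> cli for each step and persist the collected output.</p>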
<p>State is stored in a <a href="https://www.postgresql.org/">postgresql</a> database.</p>
<p>Hope you find it useful 🙌</p>
<p>enjoy.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Tezos Baking Exporter]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Devops is currently my bread and butter. For the different infra I manage I like to collect metrics using <a href="https://prometheus.io/">Prometheus</a>.</p>
<p>When it came time to design and host a <a href="https://tezos.com/">Tezos</a> baking infra I wanted to collect baking metrics using Prometheus. As I could not find any solution already, I made</p>]]></description><link>https://asbjornenge.com/tezos-baking-exporter/</link><guid isPermaLink="false">5c88fa89e05bc20001974e91</guid><category><![CDATA[tezos]]></category><category><![CDATA[baking]]></category><category><![CDATA[devops]]></category><category><![CDATA[monitoring]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Wed, 13 Mar 2019 12:52:28 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1517686469429-8bdb88b9f907?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1517686469429-8bdb88b9f907?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Tezos Baking Exporter"><p>Devops is currently my bread and butter. For the different infra I manage I like to collect metrics using <a href="https://prometheus.io/">Prometheus</a>.</p>
<p>When it came time to design and host a <a href="https://tezos.com/">Tezos</a> baking infra I wanted to collect baking metrics using Prometheus. As I could not find any solution already, I made one 😬🙌🌈🚀</p>
<p><a href="https://github.com/asbjornenge/tezos-baking-exporter">https://github.com/asbjornenge/tezos-baking-exporter</a></p>
<p><img src="https://github.com/asbjornenge/tezos-baking-exporter/raw/master/screenshots/tezos-baking-exporter-grafana.png" alt="Tezos Baking Exporter"></p>
<p>I collect metrics from the <a href="http://tezos.gitlab.io/mainnet/api/rpc.html">RPC</a> API of my own Tezos node(s). It's quite easy to work with and figure out 👍</p>
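<p>The gist of such an exporter is small: read a value from the RPC and print it in Prometheus' text exposition format. Below is a rough sketch - the metric name, label and values are made up for illustration and do not match the exporter's actual output.</p>

```javascript
// Format a single Prometheus gauge in the text exposition format.
// Metric and label names here are hypothetical, not the exporter's real ones.
function formatGauge(name, labels, value) {
  var labelStr = Object.keys(labels).map(function(key) {
    return key + '="' + labels[key] + '"'
  }).join(',')
  return '# TYPE ' + name + ' gauge\n' + name + '{' + labelStr + '} ' + value
}

// The RPC reports balances in mutez (1 tez = 1,000,000 mutez),
// so convert before exposing the metric.
var balanceMutez = 42000000
var metric = formatGauge('tezos_baker_balance_tez', { baker: 'my-baker' }, balanceMutez / 1e6)
// `metric` is now ready to be served on the exporter's /metrics endpoint
```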
<p>It's still a <strong>WIP</strong> though; I know of a couple of issues with how it collects metrics and presents them - but it works well enough to make informed decisions about the state of your node and baker 👍</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Testing React components]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="http://facebook.github.io/react/">React</a> offers (imho) a paradigm shifting technology for client side web applications - or GUIs in general for that matter. If you haven't already; give it a try!</p>
<p>One of the things I really like about React is how it lends itself to <a href="http://en.wikipedia.org/wiki/Test-driven_development">TDD</a> and testing in general. Small, focused</p>]]></description><link>https://asbjornenge.com/testing-react-components/</link><guid isPermaLink="false">5c882544e05bc20001974e66</guid><category><![CDATA[react]]></category><category><![CDATA[tdd]]></category><category><![CDATA[testing]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Thu, 19 Jun 2014 21:31:00 GMT</pubDate><media:content url="https://asbjornenge.com/content/images/2019/03/logo-og.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://asbjornenge.com/content/images/2019/03/logo-og.png" alt="Testing React components"><p><a href="http://facebook.github.io/react/">React</a> offers (imho) a paradigm shifting technology for client side web applications - or GUIs in general for that matter. If you haven't already; give it a try!</p>
<p>One of the things I really like about React is how it lends itself to <a href="http://en.wikipedia.org/wiki/Test-driven_development">TDD</a> and testing in general. Small, focused components relying mostly on props (parameters) are easy to reason about and require little mocking.</p>
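<p>The core idea - not React specific at all - is that a component which is a pure function of its props produces the same output for the same input, so a test just feeds it props and checks the result. Sketched here without React, purely to show the principle:</p>

```javascript
// A props-driven "component": same props in, same output out.
// No hidden state, no DOM setup, nothing to mock.
function greeting(props) {
  return '<div>Hello ' + props.name + '</div>'
}
```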
<p>The following are some of my experiences testing React components. Ready? Let's go!</p>
<img src="https://www.asbjornenge.com/img/expedition.gif" style="width: 710px" alt="Testing React components">
<h2 id="reacttestutils">React Test Utils</h2>
<p>React ships with very good test utilities. Unfortunately the documentation is somewhat hidden away on their website. Here is a <a href="http://facebook.github.io/react/docs/test-utils.html">link</a>.</p>
<pre><code>var React          = require('react')
var ReactAddons    = require('react/addons') // You also need to require the addons
var ReactTestUtils = React.addons.TestUtils  // &lt;- YEAH!
</code></pre>
<h2 id="usejsdom">Use jsdom</h2>
<p>For any kind of testing to be tolerable, TDD especially, efficient feedback loops are essential. Having to continuously pass things off to a browser, or even worse, multiple browsers, is a pain. Luckily we have <a href="http://nodejs.org/">nodejs</a> &amp; <a href="https://github.com/tmpvar/jsdom">jsdom</a>. The React guys themselves use jsdom for testing.</p>
<p>I like to wrap up jsdom so that it is not required if a <code>document</code> already exists. That way the tests can run both in node and browsers.</p>
<p><strong>UPDATE:</strong> There is a <a href="https://www.npmjs.com/package/testdom">module</a> for that (now) :-)</p>
<pre><code>$ vi testdom.js
  module.exports = function(markup) {
      if (typeof document !== 'undefined') return
      var jsdom          = require(&quot;jsdom&quot;).jsdom
      global.document    = jsdom(markup || '')
      global.window      = document.createWindow()
      // ... add whatever browser globals your tests might need ...
  }

$ vi test/spec.js
  require('../testdom')('&lt;html&gt;&lt;body&gt;&lt;/body&gt;&lt;/html&gt;')
  console.log(document)
</code></pre>
<p>Later we will see how we can hook up our tests to a <a href="http://en.wikipedia.org/wiki/Continuous_integration">CI</a> tool for some sweet cross browser coverage.</p>
<h2 id="avoidjsx">Avoid JSX</h2>
<p>It's up to you, but using <a href="http://facebook.github.io/react/docs/jsx-in-depth.html">jsx</a> introduces an additional build step everywhere without adding much of a benefit. Using the React.DOM javascript API is really straightforward. It'll take you 2 minutes to figure out.</p>
<p><img src="https://media.giphy.com/media/rPacX3PmMzo52/giphy.gif" alt="Testing React components"></p>
<p><strong>UPDATE:</strong> I've since changed my mind and am now using jsx. Partly because React introduced some breaking changes that would not have been a PITA if I had<br>
been using jsx, and partly because it is a nice visual separation of logic and components.</p>
<h2 id="includeacommonrenderfunction">Include a common render function</h2>
<p>For each test you want a clean slate. Usually this means rendering the component again. It makes sense wrapping the render code into a function.</p>
<pre><code>var _ = require('lodash') // or similar
var defaultProps = {}

function render(newProps, callback) {
    var props = _.merge({}, defaultProps, newProps) // merge into a fresh object so defaultProps isn't mutated between tests
    return React.renderComponent(Component(props), document.body, function() {
        if (typeof callback === 'function') setTimeout(callback)
    })
}
</code></pre>
<p>I find that keeping a set of defaultProps around makes sense. Callers of render can pass their required props (<em>newProps</em>) and have them merged with defaultProps before rendering, overwriting the defaults if they want. Since we are testing components in isolation we can usually just mount to <em>document.body</em>. <code>React.renderComponent</code> takes a callback that is called when the component has finished rendering. I found that pushing my <em>render</em>'s callback to the next tick of the eventloop (using <em>setTimeout</em>) resulted in a more stable test environment.</p>
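<p>As a quick illustration of the override behaviour (using a shallow <code>Object.assign</code> here just to show the idea - lodash's <code>merge</code> is deep):</p>

```javascript
// Props passed by a single test win over the shared defaults.
// Merging into a fresh object keeps defaultProps untouched between tests.
var defaultProps = { label: 'Submit', disabled: false }

function mergeProps(newProps) {
  return Object.assign({}, defaultProps, newProps)
}

var props = mergeProps({ disabled: true })
// props: { label: 'Submit', disabled: true }; defaultProps is unchanged
```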
<h2 id="cleanupaftereachtest">Clean up after each test</h2>
<p>If you try to render a React component into a DOM which already has react identifiers, React will merge with whatever is already there. Especially when testing the same component over and over, you need to clean up your DOM state.</p>
<p>How to do this depends on your test framework. Here is what I do in <a href="http://visionmedia.github.io/mocha/">mocha</a> (tdd interface):</p>
<pre><code>describe('My Component', function() {

    afterEach(function(done) {
        React.unmountComponentAtNode(document.body) // Assuming mounted to document.body
        document.body.innerHTML = &quot;&quot;                // Just to be sure :-P
        setTimeout(done)
    })

    ...tests...
})
</code></pre>
<p>We use <code>React.unmountComponentAtNode</code> to unmount the component. Just to be safe we also reset body's innerHTML. I found once again that pushing the callback (<em>done</em>) to the next tick of the eventloop (using <em>setTimeout</em>) created a more stable test suite.</p>
<h2 id="querythedom">Query the DOM</h2>
<p>You can query the DOM directly using the tool of your choice, or you can use the <strong>ReactTestUtils</strong> to query React components.</p>
<pre><code>it('should render an input', function(done) {
    var _tree = render({}, function() {
        var __input = document.querySelectorAll('input')
        var _input  = ReactTestUtils.findRenderedDOMComponentWithTag(_tree, 'input')
        assert(...)
    })
})
</code></pre>
<p>As you might have noticed the <code>findRenderedDOMComponentWithTag</code> (and most other functions of <strong>ReactTestUtils</strong>) require a ReactComponent parent/tree to query. Luckily we designed our <em><strong>render</strong></em> function to return the top level component.</p>
<h2 id="simulateevents">Simulate events</h2>
<p>The <strong>ReactTestUtils</strong> also lets you simulate events. This is very useful!</p>
<pre><code>it('should do something when I click mySpecialButton', function(done) {
    var _tree = render({}, function() {
        var _button = ReactTestUtils.findRenderedDOMComponentWithClass(_tree, 'mySpecialButton')
        ReactTestUtils.Simulate.click(_button)
        assert(...)
    })
})
</code></pre>
<p>For more about the capabilities of <strong>ReactTestUtils</strong> check out the <a href="http://facebook.github.io/react/docs/test-utils.html">docs</a>.</p>
<h2 id="fakingxmlhttprequests">Faking XMLHttpRequests</h2>
<p>(<em>Not really React specific, but I'll add a note about it anyway.</em>)</p>
<p>Need to fake XMLHTTPRequests? There is a <a href="https://www.npmjs.org/package/fakexmlhttprequest">module</a> for that!</p>
<pre><code>var FakeXMLHTTPRequests = require('fakexmlhttprequest')

var requests   = []
XMLHttpRequest = function() { 
    var r =  new fakeXMLHttpRequest(arguments)
    requests.push(r)
    return r
}

describe('My component', function() {

    afterEach(function() {
        requests = [] // &lt;- Reset request pool after each test
        ...
    })
    
    it('gonna get some data over the wire', function(done) {
        var onDataReceived = function(data) { assert(...); done() }
        render({ onDataReceived : onDataReceived }, function() {
            assert(requests.length &gt; 0)
            requests[0].respond(200, { &quot;Content-Type&quot;: &quot;application/json&quot; }, JSON.stringify({...}))
        })
    })

})
</code></pre>
<h2 id="runningtest">Running tests</h2>
<p>I use mocha solely for the <a href="https://www.nyan.cat/">nyancat</a> reporter.</p>
<pre><code>npm install -g mocha
</code></pre>
<p>I also find it useful to add my test command to <code>package.json</code> so that I can run my tests consistently with the same command across projects.</p>
<pre><code>$ vi package.json
    ...
    &quot;scripts&quot;: {
        &quot;test&quot;: &quot;mocha -R nyan -w --check-leaks&quot;,
    },
    ...

$ npm test
</code></pre>
<img src="https://www.asbjornenge.com/img/nyancat.gif" style="width: 250px" alt="Testing React components">
<h2 id="testling">Testling</h2>
<p>Running the tests in node is convenient and fast, but it is <strong>NOT THE SAME</strong> as running them in actual browser. So, we need to hook up some actual browser testing too. <a href="https://ci.testling.com/">Testling</a> is a great alternative and free for open source projects. They have great <a href="https://ci.testling.com/guide/quick_start">documentation</a> and even a special little guide for using <a href="https://ci.testling.com/guide/mocha">mocha</a>. Plus, you'll get sweet badges:</p>
<p><a href="https://ci.testling.com/asbjornenge/nanoxhr"><img src="https://ci.testling.com/asbjornenge/nanodom.png" alt="Testing React components"><br>
</a></p>
<p>There is one little trick I wanted to add though. Testling uses <a href="http://browserify.org/">browserify</a> to create a browser compatible bundle of your javascripts. Unfortunately jsdom is not compatible with browserify, so we have to tell testling to ignore it.</p>
<p>In your <code>package.json</code> add a <em>browser</em> field and tell browserify to ignore <em>jsdom</em>.</p>
<pre><code>$ vi package.json

...
&quot;browser&quot; : {
    &quot;jsdom&quot; : false
},
...
</code></pre>
<p>Since we, in our jsdom wrapper above, only try to require jsdom if no document exists, the browser will never reach that code and we are good. The tests will use the browser's DOM.</p>
<img src="https://i.giphy.com/5iXTLFjce2qcw.gif" style="width: 200px" alt="Testing React components">
<p>Now go get some test coverage for your React components!</p>
<h2 id="credits">Credits</h2>
<p>♥ to the React guys.<br>
And the nyancat.<br>
And coffee.<br>
And tests.<br>
And <a href="https://giphy.com/">gifs</a>.</p>
<p>enjoy!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Dock in the clouds]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Tired of having virtualbox and a thousand docker containers dragging your macbook air down in the mud? Put your docker host in the clouds!</p>
<p>In this post we will create a docker host running in the cloud and hook that up to our local development environment using a SSH based</p>]]></description><link>https://asbjornenge.com/dock-in-the-clouds/</link><guid isPermaLink="false">5c882250e05bc20001974e4c</guid><category><![CDATA[docker]]></category><category><![CDATA[networking]]></category><category><![CDATA[linux]]></category><category><![CDATA[vpn]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Tue, 10 Jun 2014 21:19:00 GMT</pubDate><media:content url="https://asbjornenge.com/content/images/2019/03/docker-cloud-twitter-card.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://asbjornenge.com/content/images/2019/03/docker-cloud-twitter-card.png" alt="Dock in the clouds"><p>Tired of having virtualbox and a thousand docker containers dragging your macbook air down in the mud? Put your docker host in the clouds!</p>
<p>In this post we will create a docker host running in the cloud and hook that up to our local development environment using a SSH based &quot;VPN&quot;.</p>
<p>Here is a diagram.</p>
<pre><code>    +---------------+               SSH                +----------------+
    |  development  | tun0         tunnel         tun0 |  docker host   |
    |               | &lt;-------------------------------&gt;|                |
    |   (client)    | 10.0.0.1                10.0.0.2 | (cloud_server) |
    +-------+-------+     point to point connection    +-------+--------+
       eth0 |                                                  | eth0
192.168.0.2 |                                                  | 10.0.0.10
            |                    _  _                          | docker0
            |                   ( `   )_                       | 172.17.42.1
            -----------------  (    )    `)  -------------------
                              (_   (_ .  _) _)
                                  INTERNET
</code></pre>
<h2 id="thecloudserver">The Cloud Server</h2>
<p>Pick a cloud, any cloud.</p>
<p>We'll stick with an <strong>Ubuntu 12.04 LTS</strong> on <a href="http://aws.amazon.com/">AWS</a> EC2, but feel free to choose <a href="https://www.digitalocean.com/">digitalocean</a> or any other supplier and/or linux distribution. Create your VPS (follow provider instructions) and log in.</p>
<pre><code>$ ssh &lt;cloud_server_ip&gt;
</code></pre>
<h3 id="preparedocker">Prepare Docker</h3>
<p>Follow the appropriate <a href="http://docs.docker.io/en/latest/installation/">instructions</a> to install docker. By default the docker daemon binds to a unix socket only. To be able to communicate with docker over the network, we need to bind to a network interface.</p>
<pre><code>$ sudo vi /etc/init/docker.conf
	&quot;$DOCKER&quot; -d $DOCKER_OPTS -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243
</code></pre>
<p><strong>WARNING!</strong>  Binding to <code>0.0.0.0</code> will expose the docker host on all network interfaces on the server!! That might not be what you want. If you only want to talk to the docker host over the tunnel, you can bind to <code>10.0.0.2</code>. Let's keep it exposed while we are testing, but remember to go back and fix that later.</p>
<h3 id="preparessh">Prepare SSH</h3>
<p>To make this work, we need to permit root login over ssh. Now, that might seem like a security hazard. But, we are only going to allow login using keys (not passwords), and later on we are going to limit the access of root over ssh to only run a specific command.</p>
<pre><code>$ sudo vi /etc/ssh/sshd_config
	PermitRootLogin yes
	PermitTunnel yes
	PasswordAuthentication no # &lt;- optional, but recommended!
$ sudo service ssh restart
</code></pre>
<p><strong>Friendly tip:</strong> <em>When modifying sshd_config and restarting the ssh service; test your new configuration on a different session/terminal, keeping the one you already have open, just in case you messed up something.</em></p>
<p>We also need to remove any initial scripting under root's <em><strong>authorized_keys</strong></em> that was added to prevent login as root.</p>
<pre><code>$ sudo vi /root/.ssh/authorized_keys
</code></pre>
<p>Remove anything resembling the following:</p>
<pre><code>no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command=&quot;echo 'Please login as the user \&quot;ubuntu\&quot; rather than the user \&quot;root\&quot;.';echo;sleep 10&quot;
</code></pre>
<p>...so that your file starts with <code>ssh-rsa &lt;long_key&gt;</code>.</p>
<h3 id="permitforwarding">Permit forwarding</h3>
<pre><code>$ sudo su -c &quot;echo 1 &gt; /proc/sys/net/ipv4/ip_forward&quot;
</code></pre>
<h2 id="theclient">The Client</h2>
<p>If you're on OSX (like me), install <a href="http://tuntaposx.sourceforge.net/">tuntap</a>.</p>
<p>Grab the docker client!</p>
<pre><code>$ curl https://get.docker.io/builds/Darwin/x86_64/docker-latest -o docker
$ chmod +x docker
$ sudo cp docker /usr/local/bin/
</code></pre>
<p>That's it!</p>
<h2 id="connecting">Connecting</h2>
<p>Connecting to your docker host is as simple as:</p>
<pre><code>(client) $ sudo ssh -w 0:0 -i key.pem root@&lt;cloud_server_ip&gt;
</code></pre>
<p>Passing the <code>-w</code> flag to <code>ssh</code> will have ssh create a tunnel device on each end of the connection. The <code>0:0</code> indicates which tunnel device at each end. Since we have specified 0 at each end, both the device on the server and the client will be <code>tun0</code>. Now we need to route some traffic so we can talk to docker on the server.</p>
<h2 id="tunnelsroutesdocker">Tunnels &amp; Routes &amp; Docker</h2>
<p>Setup the tunnel devices:</p>
<pre><code>(server) $ sudo ifconfig tun0 10.0.0.2 pointopoint 10.0.0.1
(client) $ sudo ifconfig tun0 10.0.0.1 10.0.0.2
(client) $ ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: icmp_seq=0 ttl=64 time=58.643 ms
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=58.410 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=58.586 ms
^ woohoo!
</code></pre>
<p>Setup routing:</p>
<pre><code>(client) $ sudo route -n add -net 10.0.0.0 10.0.0.2
(client) $ sudo route -n add -net 172.0.0.0 10.0.0.2
</code></pre>
<p>Set the docker host:</p>
<pre><code>(client) $ export DOCKER_HOST=tcp://172.17.42.1:4243
</code></pre>
<p>Et voilà:</p>
<pre><code>(client) $ docker ps
CONTAINER ID        IMAGE                           COMMAND                CREATED             STATUS              PORTS
</code></pre>
<p><img src="https://media.giphy.com/media/1MTLxzwvOnvmE/giphy.gif" alt="Dock in the clouds"></p>
<h2 id="addtionalsetupsecurity">Additional setup &amp; security</h2>
<p>Now, all this is quite a bit to remember, so I definitely recommend scripting parts of the setup. One neat thing you can do that will both simplify the process and strengthen security is to only allow root to run a specific command over ssh, and have that command be opening the tunnel.</p>
<p>First, add the <code>tun0</code> interface to <code>/etc/network/interfaces</code></p>
<pre><code>$ sudo vi /etc/network/interfaces
    iface tun0 inet static
	   address 10.0.0.2
	   pointopoint 10.0.0.1
	   netmask 255.255.255.0
	   # Forward traffic into server side network
	   iptables -t nat -A POSTROUTING --source 10.0.0.2 -j SNAT --to-source 10.0.0.10
</code></pre>
<p>Add a command to root's <code>authorized_keys</code> (first thing in the file) to bring up the interface on connection.</p>
<pre><code>$ sudo vi /root/.ssh/authorized_keys
    tunnel=&quot;0&quot;,command=&quot;/sbin/ifdown tun0;/sbin/ifup tun0&quot; ssh-rsa ....
</code></pre>
<p>Only allow commands for root over ssh.</p>
<pre><code>$ sudo vi /etc/ssh/sshd_config
    PermitRootLogin forced-commands-only
$ sudo service ssh restart
</code></pre>
<p>Now, whenever you ssh in as root the server will try to bring up the tunnel interface.</p>
<h2 id="dns">DNS</h2>
<p>If you want simple dns based service discovery for your containers over your new cloud bridge, apply the same tools and techniques discussed in my previous article <a href="https://asbjornenge.com/wwc/vagrant_skydocking.html">Vagrant Skydocking</a>. You will be pinging <code>redis.staging.yourapp</code> in no time.</p>
<h2 id="closingthoughts">Closing thoughts</h2>
<p>I have been applying this technique for connecting to different VPCs I manage on amazon. Once scripted it's really nice being able to connect to a specific environment with one command and be hands on the docker host(s) and the containers in that environment.</p>
<p>Originally I had been hoping to also use a VPC for development purposes. Unfortunately the latency from my location to the datacenter gets annoying when trying to get efficient feedback loops. This is however not a fault of this approach, but of the distance from me to the datacenter. Your mileage may vary. Setting up a development host closer to home did the trick.</p>
<p>Time to dance!</p>
<p><img src="https://media.giphy.com/media/KMGVJZVQMmNvG/giphy.gif" alt="Dock in the clouds"></p>
<h2 id="credits">Credits</h2>
<p>♥ goes out to ssh.<br>
And all the amazing unix hackers out there!<br>
Including the authors of <a href="http://www.debian-administration.org/article/539/Setting_up_a_Layer_3_tunneling_VPN_with_using_OpenSSH">this</a> and <a href="http://wouter.horre.be/doc/vpn-over-ssh">that</a>.<br>
Gifs from <a href="http://gifs.joelglovier.com/">here</a>.<br>
Thank you internet of folks!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Tiny node containers]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>My favorite language at the moment is javascript. It's fun &amp; functional!</p>
<p>Since I'm also working quite a bit with <a href="http://docker.io">docker</a>, I've been frustrated with the size of nodejs docker images. A typical node container holds <code>node</code>, <code>npm</code> and all your <code>dependencies</code>. Add a few <code>apt-get</code>'s and you're quickly</p>]]></description><link>https://asbjornenge.com/tiny-node-containers/</link><guid isPermaLink="false">5c882076e05bc20001974e37</guid><category><![CDATA[docker]]></category><category><![CDATA[node.js]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Tue, 11 Mar 2014 21:10:00 GMT</pubDate><media:content url="https://asbjornenge.com/content/images/2019/03/nodejs-3.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://asbjornenge.com/content/images/2019/03/nodejs-3.jpg" alt="Tiny node containers"><p>My favorite language at the moment is javascript. It's fun &amp; functional!</p>
<p>Since I'm also working quite a bit with <a href="http://docker.io">docker</a>, I've been frustrated with the size of nodejs docker images. A typical node container holds <code>node</code>, <code>npm</code> and all your <code>dependencies</code>. Add a few <code>apt-get</code>'s and you're quickly looking at &gt; <strong>500 MB</strong>.</p>
<p>I even started hacking some <a href="http://golang.org/">Go</a> solely for the ability to compile to a single binary.</p>
<p>Until I found <a href="https://github.com/crcn/nexe">nexe</a>...</p>
<img src="https://media.giphy.com/media/a2bVaOvWsRrgs/giphy.gif" alt="Tiny node containers">  
<font color="#999">*I can haz javascript aaaand binary???*</font>  
<h2 id="buildingwithnexe">Building with nexe</h2>
<p>Nexe will compile your node app into a single executable binary. No joke! Have a <a href="https://github.com/crcn/nexe">look</a>!</p>
<p>Since we are now compiling, we need to think about things like <em>compile target</em>. Containers run linux. My desktop runs Darwin. A binary compiled on/for Darwin won't be able to run inside a container. So, I made a <a href="https://index.docker.io/u/asbjornenge/nexe-docker/">container</a> for compiling apps with nexe.</p>
<pre><code>docker run -v $(pwd):/app -w /app asbjornenge/nexe-docker -i index.js -o app
</code></pre>
<h3 id="weirdbugs">Weird bugs</h3>
<p>Granted, nexe is a bit flakey atm. I found two main bugs that I had to work around:</p>
<p>A default package.json somehow messes up the executable.<br>
<em><strong>Workaround:</strong></em> <em>I added a build script that will move package.json to pkg.json, build, then move it back.</em></p>
<p>When passing arguments to a compiled binary, there must exist a first argument.<br>
<em><strong>Workaround:</strong></em> <em>Just pass a random first argument.</em></p>
<h2 id="container">Container</h2>
<p>When distributing, we can use the simplest container possible, and just add the binary.</p>
<pre><code>FROM debian:jessie
ADD app /usr/bin/app
ENTRYPOINT [&quot;app&quot;]
</code></pre>
<h2 id="diff">Diff</h2>
<p>I used this approach to build <a href="https://github.com/asbjornenge/skylink">skylink</a>, check out the difference!</p>
<pre><code>      |   normal  |  nexe
 ---------------------------
 size |  640.3 MB | 133.6 MB
</code></pre>
<h2 id="credits">Credits</h2>
<p>♥ to the <a href="https://github.com/crcn/nexe">nexe</a> folks!<br>
Gif from <a href="https://github.com/jglovier/gifs">here</a>.<br>
Thanks!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Vagrant skydocking]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I've been working quite a bit with <a href="http://docker.io">docker</a> lately. If you haven't yet checked it out, it's about time. Docker is already popping paradigms.</p>
<h2 id="abridgeovervagrantwater">A bridge over vagrant water</h2>
<p>Since I'm on OSX I'm running my docker host on Virtualbox via <a href="http://www.vagrantup.com/">Vagrant</a>.</p>
<p>Instead of having to forward ports and using</p>]]></description><link>https://asbjornenge.com/vagrant-skydocking/</link><guid isPermaLink="false">5c881d66e05bc20001974e18</guid><category><![CDATA[Tech]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Wed, 29 Jan 2014 20:58:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1455487890814-f11ab4eaec4b?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1455487890814-f11ab4eaec4b?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Vagrant skydocking"><p>I've been working quite a bit with <a href="http://docker.io">docker</a> lately. If you haven't yet checked it out, it's about time. Docker is already popping paradigms.</p>
<h2 id="abridgeovervagrantwater">A bridge over vagrant water</h2>
<p>Since I'm on OSX I'm running my docker host on Virtualbox via <a href="http://www.vagrantup.com/">Vagrant</a>.</p>
<p>Instead of having to forward ports and using lots of -p args when spawning containers, I wanted to bridge my host and the vm's docker interface, so that I could ping my containers from my OSX terminal.</p>
<p>Create a <strong>private_network</strong> in your Vagrantfile. I'm picking an ip on a different subnet than the docker0 interface to avoid any potential conflicts.</p>
<pre><code>Vagrant::VERSION &gt;= &quot;1.1.0&quot; and Vagrant.configure(&quot;2&quot;) do |config|
	config.vm.network &quot;private_network&quot;, ip: &quot;10.2.0.10&quot;, netmask: &quot;255.255.0.0&quot;
	config.vm.provider :virtualbox do |vb|
		vb.customize [&quot;modifyvm&quot;, :id, &quot;--nicpromisc2&quot;, &quot;allow-all&quot;]
	end
end
</code></pre>
<p>The <em>vb.customize</em> is to allow forwarding packets for the bridge interface. The <em>--nicpromisc2</em> translates to <em>Promiscuous mode for nic2</em>, where nic2 -&gt; eth1. So --nicpromisc3 would change that setting for eth2, etc.</p>
<p>After reloading vagrant we need to create a <strong>route</strong> on the host. Basically, any traffic trying to reach the docker subnet (172.17.0.0) should be routed to our new interface inside the vm (10.2.0.10).</p>
<pre><code># OSX
$&gt; sudo route -n add -net 172.17.0.0 10.2.0.10
# Linux (untested)
$&gt; sudo route add -net 172.17.0.0 netmask 255.255.0.0 gw 10.2.0.10
</code></pre>
<p>You now have a <strong>bridge</strong> from your host to your docker network!!</p>
<pre><code>$&gt; IP=`docker inspect -format='{{.NetworkSettings.IPAddress}}' skydns`
$&gt; ping $IP
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_req=1 ttl=64 time=0.232 ms
64 bytes from 172.17.0.3: icmp_req=2 ttl=64 time=0.103 ms
^C
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.103/0.167/0.232/0.065 ms
</code></pre>
<p><img src="https://raw2.github.com/jglovier/gifs/gh-pages/aha/aha.gif" alt="Vagrant skydocking"></p>
<h2 id="skydock">Skydock</h2>
<p>Docker is all about distributed systems; packing single components inside containers and having them talk to each other. One of the pain points when shattering your monolith is linking all those loose components together.</p>
<p>(Docker provides a -link parameter for linking containers. But this quickly falls short in complex scenarios.)</p>
<p>I was just about to dig into service discovery solutions like <a href="https://github.com/coreos/etcd">etcd</a> or similar, when <a href="http://crosbymichael.com/">Michael Crosby</a> posted his <a href="https://github.com/crosbymichael/skydock">skydock</a> (<a href="https://www.youtube.com/watch?v=Nw42q1ofrV0">video</a>). It's brilliant! It lets you discover your services via <strong>DNS</strong>. I won't go into setting up skydock, just check out the awesome <a href="https://github.com/crosbymichael/skydock">tutorial</a> by Michael.</p>
<p>So, with skydock my containers can discover each other via DNS names like <strong>myservice.env.domain.com</strong>. Awesome! But, with my network bridge set up, so can my host!! No? That would be really nice for development...</p>
<pre><code>$&gt; curl elasticsearch.dev.domain.com:9200
curl: (6) Could not resolve host: elasticsearch.dev.domain.com
</code></pre>
<p>﴾͡๏̯͡๏﴿ ... Ah, we need to hook up skydns as a nameserver. This is where I stray a little from Michael's skydock tutorial. I had some issues binding to the docker0 interface (docker v0.7.6), so instead I'm using the skydns container as the nameserver directly (PS! this requires passing a -dns &lt;skydns_ip&gt; arg to each new container). Either way, we have to edit resolv.conf.</p>
<pre><code>$&gt; sudo vi /etc/resolv.conf
   # nameserver 172.17.42.1 &lt;- skydock tutorial
   nameserver 172.17.0.3 # &lt;- skydns container ip
$&gt; dig elasticsearch.dev.domain.com
;; ANSWER SECTION:
elasticsearch.dev.domain.com.	20	IN	A	172.17.0.7
</code></pre>
<p>✌(-‿-)✌ ... Hoplah! Now, hopefully that will be it for you and you're all set to curl containers from the comforts of your host terminal! I however, had one more issue to solve...</p>
<pre><code>$&gt; curl elasticsearch.dev.domain.com:9200
curl: (6) Could not resolve host: elasticsearch.dev.domain.com # w00000000t???
</code></pre>
<h3 id="osxweirdness">OSX weirdness</h3>
<p>Apparently OSX is rather weird in how it handles DNS. <strong>dig</strong>, <strong>host</strong>, etc. can resolve the host just fine, but other tools like <strong>curl</strong> and even <strong>ping</strong> do not obey resolv.conf. I eventually stumbled across the issue and found <a href="https://github.com/michthom/AlwaysAppendSearchDomains">this</a> script that apparently solves it for most people. It didn't help. Eventually I added the DNS server via OSX <a href="http://support.apple.com/kb/PH14159">network preferences</a>, and that did the trick.</p>
<pre><code>$&gt; curl elasticsearch.dev.domain.com:9200
{
	&quot;ok&quot; : true,
	&quot;status&quot; : 200,
	&quot;name&quot; : &quot;Damian, Margo&quot;,
	&quot;version&quot; : {
		&quot;number&quot; : &quot;1.0.0.Beta2&quot;,
		&quot;build_hash&quot; : &quot;296cfbe390dc51bb00c00ba48ad0c8a9efabcfe9&quot;,
		&quot;build_timestamp&quot; : &quot;2013-12-02T15:46:27Z&quot;,
		&quot;build_snapshot&quot; : false,
		&quot;lucene_version&quot; : &quot;4.6&quot;
	},
	&quot;tagline&quot; : &quot;You Know, for Search&quot;
}
</code></pre>
<p><img src="https://i0.kym-cdn.com/profiles/icons/big/000/055/347/1313845263510.gif" alt="Vagrant skydocking"></p>
<p>I'm now a ᕙ༼ຈل͜ຈ༽ᕗ curl’er of containers!!</p>
<h2 id="credits">Credits</h2>
<p><a href="http://docker.io">Docker</a>, <a href="https://github.com/crosbymichael/skydock">Skydock</a> and <a href="https://github.com/skynetservices/skydns">Skydns</a> all deserve a big fat ♥.<br>
I followed <a href="https://blog.codecentric.de/en/2014/01/docker-networking-made-simple-3-ways-connect-lxc-containers/">this</a> guide by <a href="https://twitter.com/drivebytesting">Lukas Pustina</a> to set up my vagrant networking.<br>
Gifs from <a href="https://github.com/jglovier/gifs">here</a> and faces from <a href="https://github.com/maxogden/cool-ascii-faces">there</a>.<br>
Thanks!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Out of sorts]]></title><description><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<p>My first real fight with fonts.</p>
</blockquote>
<h4 id="disclaimer">Disclaimer</h4>
<p>Before I even start this I should probably state that this adventure leads into unfamiliar terrain, and over half my findings are probably half-witted nonsense. There. I should probably start all my blogposts like that.</p>
<h2 id="introduction">Introduction</h2>
<p>Fonts are important. Most of what we</p>]]></description><link>https://asbjornenge.com/out-of-sorts/</link><guid isPermaLink="false">5c881b33e05bc20001974e09</guid><category><![CDATA[Tech]]></category><category><![CDATA[font]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Sat, 30 Mar 2013 20:48:00 GMT</pubDate><media:content url="https://asbjornenge.com/content/images/2019/03/font-example-1440x720.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<img src="https://asbjornenge.com/content/images/2019/03/font-example-1440x720.jpg" alt="Out of sorts"><p>My first real fight with fonts.</p>
</blockquote>
<h4 id="disclaimer">Disclaimer</h4>
<p>Before I even start this I should probably state that this adventure leads into unfamiliar terrain, and over half my findings are probably half-witted nonsense. There. I should probably start all my blogposts like that.</p>
<h2 id="introduction">Introduction</h2>
<p>Fonts are important. Most of what we see on our screens is text in some form, or typeface, to dive right into the syntax.</p>
<p>Starting out on my current font adventure I was quite shocked by how little I knew about the font world. I had been developing websites and apps full of text for years, but hardly knew what a baseline was. To some extent that is a good thing: it had just worked. On the other hand, it is a level of control over my design I had been completely ignorant about.</p>
<h2 id="thedesignbullet">The design bullet</h2>
<p>The current problem I was facing seemed simple enough; allow a line of text to include a bullet. Easy as pie.</p>
<pre><code>&lt;span style=&quot;display:list-item&quot;&gt;Some Text&lt;/span&gt;
</code></pre>
<p>Turned out it wasn't quite so easy. These bullets were part of our client's design manual, but they were not the same as the bullet glyph of the font. Modifying the font was also out of the question because of licensing.</p>
<p>But, there was a clear definition; the bullet was a square of <em>height</em> and <em>width</em> <em>x</em> relative to the <em>font-size</em>, <strong>vertically aligned</strong> (centered) with the font's <a href="http://en.wikipedia.org/wiki/Baseline_(typography)"><strong>x-height</strong></a>.</p>
<p>Calculating the x-height of a target element is easy enough using css's <a href="http://www.w3.org/Style/Examples/007/units#units"><strong>ex</strong></a> unit.</p>
<pre><code>$('&lt;div style=&quot;width:1ex&quot;&gt;&lt;/div&gt;').appendTo(target)[0].offsetWidth
</code></pre>
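<p>Without jQuery, the same measurement might look something like this (a sketch; the <code>doc</code> parameter is only there to avoid hard-wiring the global <code>document</code>):</p>

```javascript
// Measure the x-height of the font in `target`, in pixels, by
// inserting a 1ex-wide probe element and reading its offsetWidth.
function xHeightInPx(target, doc) {
  doc = doc || document
  const probe = doc.createElement('div')
  probe.style.width = '1ex'
  target.appendChild(probe)
  const px = probe.offsetWidth
  target.removeChild(probe) // clean up the probe again
  return px
}
```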
<p>But the x-height itself was of little use. To vertically align my bullet with the x-height, I needed to know the margin, bottom or top, of the <strong>baseline</strong> or <strong>median</strong>; I needed more metrics.</p>
<p>Alright, easy enough. Let's see… <em>&quot;javascript font metrics&quot;</em>. Uhm…</p>
<h4 id="thebadnews">The bad news</h4>
<p>There is no built-in, easy, standard way of extracting the metrics of a font.</p>
<h4 id="thegoodnews">The good news</h4>
<p>It's possible to calculate! AND, there is a great <a href="https://github.com/Pomax/Font.js">library</a> that will do most of the heavy lifting for you! We'll get to that.</p>
<h2 id="calculation">Calculation</h2>
<p>To calculate a font's vertical metrics, there are two approaches as far as I can tell.</p>
<p><strong>1. Measuring dom elements</strong></p>
<p>The first <a href="http://www.brunildo.org/test/xheight.pl">approach</a> is to use a bunch of dom elements with specific font-related metrics (1em, 1ex, etc.) and measure these in px (offsetWidth) at different levels and at different font-size's.</p>
<p>The approach seems to work quite well for the calculation part. Sturdy across browsers and fonts. For the actual positioning there were other icebergs floating around.</p>
<p>NB! The solution is a possible performance drain if used unwisely - measuring offsetWidth might cause unwanted reflow (a re-layout of your DOM elements).</p>
<p><strong>2. Canvas</strong></p>
<p>The second <a href="http://processingjs.nihongoresources.com/FontMetrics/">approach</a> is using the canvas element. The 2d context of a canvas has <em>font</em>, <em>fillText</em> and <em>measureText</em> functions. Unfortunately <a href="http://www.w3.org/TR/2012/WD-2dcontext-20120329/#dom-context-2d-measuretext"><em>measureText</em></a> only deals with the <a href="http://www.w3.org/TR/2012/WD-2dcontext-20120329/#textmetrics">width</a> metric, but that seems to be about to <a href="http://www.w3.org/TR/2dcontext/#textmetrics">change</a> (!!). For now though, the approach is to dump and analyze the raw pixel data and figure out how many pixels are used vertically to draw different letters of the font.</p>
<p>This approach also works perfectly for the calculation part, and thanks to the awesome <a href="http://processingjs.nihongoresources.com/FontMetrics/fontmetrics.js">fontmetrics.js</a> it's easy.</p>
<p>But again, for the actual positioning, I was soon stuck in a pitch black room (next to a tiny, grey, startling little cat with diarrhea. Sitting on a matressless, iron-sprung bed with its huge eyes mewing at me. Meow. Smoking as well, probably. And then some terrible guy the colour of an aubergine round the corner holding a mug of beef tea and wearing a string vest going “meew. Fuckn brrr aaah” ~ Dylan Moran).</p>
<h2 id="fontface">@font-face</h2>
<p>The days of web typography are upon us. We are no longer limited to a handful of built-in fonts. Using technologies like <a href="http://sixrevisions.com/css/font-face-guide/">@font-face</a> we can embed &quot;any&quot; font on our page and have it render &quot;beautifully&quot; on the client's browser.</p>
<p>There are however quite a few <a href="http://www.fontsquirrel.com/blog/2010/11/troubleshooting-font-face-problems">pitfalls</a> &amp; <a href="http://www.owlfolio.org/htmletc/legibility-of-embedded-web-fonts/">legibility</a> issues.</p>
<h3 id="rendering">Rendering</h3>
<p>The one that hit me hard in the face is the fact that different browsers, and even the same browsers on different operating systems, deal very differently with how they render fonts. Even different versions of the same operating system will sometimes render fonts very differently.</p>
<p><em>At typical body-text sizes, the computer has to draw each letter using only 15 or so pixels in each direction. It’s not possible to draw each letter exactly as the typographer intended, and keep all the lines crisp and smooth, with that few pixels. Windows, OSX, and Linux all resolve this dilemma differently: to oversimplify a bit, OSX tries harder to preserve the font shapes, Windows tries harder to make the lines sharp, and Linux tries to do both at once and winds up achieving neither.<br>
~ Zachary Weinberg</em></p>
<p>Sometimes the font won't even render inside its bounding box! (!!!!) For my current problem, that makes any font metric calculation futile. Turns out, this library I've been mumbling about had a solution for even this.</p>
<h3 id="timing">Timing</h3>
<p>Another issue with embedded fonts is knowing when the font is loaded. If you try to measure prematurely you will end up measuring the fallback font, and that's no good.</p>
<p>The only viable solution I have come across is using a &quot;dummy&quot; fallback font that encodes a character as a zero-width unit, putting that in a paragraph, and polling for a real width. It's not a great solution, but it works.</p>
<h2 id="fontjs">Font.js</h2>
<p>Fortunately someone has already trodden this path for us.<br>
<a href="http://pomax.nihongoresources.com/pages/Font.js/">Font.js</a> adds a <strong>Font</strong> object to your javascript toolbelt. It's designed to behave similar to the <strong>Image</strong> object.</p>
<pre><code>var font = new Font();
font.onload  = function() {}
font.onerror = function() {}
font.src = &quot;http://your.domain.com/fonts/font.otf&quot;
</code></pre>
<p>It handles the <strong>timing</strong> issue using the solution detailed above, and will call your <em>onload</em> function when the font is available. It also gives you <strong>metrics</strong>.</p>
<pre><code>font.metrics -&gt; {}
font.measureText(string, size) -&gt; {}
</code></pre>
<p>It even handles the <strong>rendering</strong> issue (to some extent).</p>
<p><em>Font.js actually draws text offscreen, does a scanline pass to find out what the &quot;real&quot; ascent and descent is, and then sets height to ascent + 1 + descent (&quot;1&quot; for the baseline itself). This generally works quite well, but will lead to incorrect heights for fonts that don't implement the Latin blocks =)<br>
~ Michiel Kamermans</em></p>
<p>One important thing to note is that the fonts are loaded using <strong>XMLHttpRequest</strong>'s. This is important since it is the only way to get the font data so it can be inspected and manipulated. But it does mean you have to deal with hosting your own fonts or setting up <a href="http://en.wikipedia.org/wiki/Cross-origin_resource_sharing">CORS</a> to avoid <em>Access-Control-Allow-Origin</em> issues.</p>
<p>Font.js is a great library for solving most of the current headaches related to fonts.</p>
<p>Grab it from the github <a href="https://github.com/Pomax/Font.js">repo</a> or via <a href="http://twitter.github.com/bower/">bower</a>.</p>
<pre><code>bower install Font.js
</code></pre>
<h2 id="resources">Resources</h2>
<p><a href="http://pomax.nihongoresources.com/pages/Font.js/">http://pomax.nihongoresources.com/pages/Font.js/</a><br>
<a href="http://www.brunildo.org/test/xheight.pl">http://www.brunildo.org/test/xheight.pl</a><br>
<a href="http://www.icavia.com/2010/09/solving-font-face-alignment-issues/">http://www.icavia.com/2010/09/solving-font-face-alignment-issues/</a><br>
<a href="http://mudcu.be/journal/2011/01/html5-typographic-metrics/">http://mudcu.be/journal/2011/01/html5-typographic-metrics/</a><br>
<a href="http://www.owlfolio.org/htmletc/legibility-of-embedded-web-fonts/">http://www.owlfolio.org/htmletc/legibility-of-embedded-web-fonts/</a><br>
<a href="http://en.wikipedia.org/wiki/Baseline_(typography)">http://en.wikipedia.org/wiki/Baseline_(typography)</a><br>
<a href="http://stackoverflow.com/questions/1134586/how-can-you-find-the-height-of-text-on-an-html-canvas">http://stackoverflow.com/questions/1134586/how-can-you-find-the-height-of-text-on-an-html-canvas</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Zero Todo]]></title><description><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<p>Todo workflow for inbox zeroists.</p>
</blockquote>
<p>I'm an <a href="http://inboxzero.com/">inbox zeroist</a>; my inbox is my todo list.</p>
<p>For us (well, for me at least) todo applications quikly get neglected. I love their shiny UI's and impressive and thought out UX, but the fact remains that the tasks I so optimistically punch in</p>]]></description><link>https://asbjornenge.com/zero-todo/</link><guid isPermaLink="false">5c881989e05bc20001974df8</guid><category><![CDATA[Tech]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Fri, 08 Mar 2013 20:41:00 GMT</pubDate><media:content url="https://asbjornenge.com/content/images/2019/03/inbox-zero-customer-support.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<img src="https://asbjornenge.com/content/images/2019/03/inbox-zero-customer-support.png" alt="Zero Todo"><p>Todo workflow for inbox zeroists.</p>
</blockquote>
<p>I'm an <a href="http://inboxzero.com/">inbox zeroist</a>; my inbox is my todo list.</p>
<p>For us (well, for me at least) todo applications quickly get neglected. I love their shiny UI's and impressive, well-thought-out UX, but the fact remains that the tasks I so optimistically punch in never get done. I've tried numerous approaches. My read-later services are filled to the brim by awesomeness that will never get parsed by anyone but <a href="https://twitter.com/marcoarment">@marcoarment</a>'s robots.</p>
<p>I always return to my inbox, so whatever gets in there gets action.</p>
<p>The following is an attempt to simplify adding &quot;tasks&quot; to my inbox.</p>
<h2 id="postfix">Postfix</h2>
<p>Get your local postfix relaying to a proper smtp server. I followed <a href="http://www.garron.me/mac/postfix-relay-gmail-mac-os-x-local-smtp.html">this guide</a> for gmail. Be sure to also add the following to <em>/etc/postfix/main.cf</em>.</p>
<pre><code>smtp_sasl_security_options = noanonymous
</code></pre>
<p><strong>PS!</strong> See the update section at the bottom of this article.</p>
<h2 id="mail">$ mail</h2>
<p>Now you can send emails from your shell.</p>
<pre><code>df -h | mail -s &quot;Disk usage&quot; you@domain.io
</code></pre>
<h2 id="hotkey">Hotkey</h2>
<p>There are multiple ways to have a hotkey execute a script. I chose Alfred because I like Alfred and because it has support for passing any selected text as an argument to the script.</p>
<p>Add your extension. It might be a good idea to click <em>Advanced</em> and configure escaping. Mail seems to handle all these chars nicely, so I just unchecked it all.</p>
<img width="500px" src="https://github.com/asbjornenge/asbjornenge.github.com/raw/master/img/wwc/zero_todo/screenshot1.png" alt="Zero Todo">
<p>Add a hotkey for that extension and check &quot;Selected text in OS X&quot;.</p>
<img width="500px" src="https://github.com/asbjornenge/asbjornenge.github.com/raw/master/img/wwc/zero_todo/screenshot2.png" alt="Zero Todo">
<p>And that's it, you can now select any text in OS X and stack it on top of your inbox by pressing your specified hotkey.</p>
<h2 id="update">Update</h2>
<p>For Yosemite I had to add the following to <em>/etc/postfix/main.cf</em></p>
<pre><code>smtp_sasl_mechanism_filter = plain
</code></pre>
<p>Found the solution <a href="http://stackoverflow.com/questions/26447316/mac-os-x-10-10-yosemite-postfix-sasl-authentication-failed">here</a>. Thanks!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[JSON Schema Validation]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>You're probably talking JSON with a RESTful API, right?<br>
If you care about creating a great experience, you need to take error handling seriously. Handling timeouts and HTTP error codes is pretty straightforward, but handling corrupt data can be tricky. It often leaves an ugly footprint in your code.</p>]]></description><link>https://asbjornenge.com/json-schema-validation/</link><guid isPermaLink="false">5c881654e05bc20001974de8</guid><category><![CDATA[Tech]]></category><category><![CDATA[json]]></category><dc:creator><![CDATA[Asbjorn Enge]]></dc:creator><pubDate>Sun, 10 Feb 2013 20:33:00 GMT</pubDate><media:content url="https://asbjornenge.com/content/images/2019/03/1200px-JSON_vector_logo.svg.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://asbjornenge.com/content/images/2019/03/1200px-JSON_vector_logo.svg.png" alt="JSON Schema Validation"><p>You're probably talking JSON with a RESTful API, right?<br>
If you care about creating a great experience, you need to take error handling seriously. Handling timeouts and HTTP error codes is pretty straightforward, but handling corrupt data can be tricky. It often leaves an ugly footprint in your code. Lots of <strong>if</strong>'s and <strong>hasOwnProperty</strong>'s. Instead, using <a href="http://json-schema.org">json-schema</a>, you can validate your JSON data first and be sure it is as expected.</p>
<h2 id="jsonschema">JSON-Schema</h2>
<blockquote>
<p>A JSON Media Type for Describing the Structure and Meaning of JSON Documents</p>
</blockquote>
<p>Example; If you have some JSON Data:</p>
<pre><code>{
	&quot;title&quot; : &quot;Kapsokisio&quot;
}
</code></pre>
<p>You can define a corresponding JSON Schema:</p>
<pre><code>{
	&quot;type&quot; : &quot;object&quot;,
	&quot;required&quot; : [&quot;title&quot;],
	&quot;properties&quot; : {
		&quot;title&quot; : { &quot;type&quot; : &quot;string&quot; } 
	}
}
</code></pre>
<p>You can validate your data using that schema. If it is valid, you can be sure this data is an object with a title property of type string.</p>
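<p>To get a feel for what a validator does under the hood, here is a toy sketch (my own, not tv4; real validators implement the full spec) covering only a top-level object <code>type</code>, <code>required</code>, and per-property <code>type</code>:</p>

```javascript
// Toy illustration of JSON Schema validation (not tv4!).
// Handles only: "type": "object", "required", per-property "type".
function validate(data, schema) {
  if (schema.type === 'object' &&
      (typeof data !== 'object' || data === null || Array.isArray(data))) {
    return false
  }
  for (const key of schema.required || []) {
    if (!(key in data)) return false // missing required property
  }
  for (const [key, sub] of Object.entries(schema.properties || {})) {
    if (key in data && typeof data[key] !== sub.type) return false
  }
  return true
}

const schema = {
  type: 'object',
  required: ['title'],
  properties: { title: { type: 'string' } }
}

console.log(validate({ title: 'Kapsokisio' }, schema)) // true
console.log(validate({ title: 42 }, schema))           // false
```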
<h3 id="aidspecificationaspecification"><a id="specification"></a> Specification</h3>
<p>The latest <a href="http://www.ietf.org">IETF</a> draft is currently <a href="http://tools.ietf.org/html/draft-zyp-json-schema-03">v3</a>, but they have a v4 <em>being prepared for submission in early 2013</em>. <strong>This post will focus on v4</strong>.</p>
<p><strong>UPDATE</strong><br>
The new drafts are up!</p>
<p><strong>Core:</strong><br>
<a href="http://tools.ietf.org/html/draft-zyp-json-schema-04">http://tools.ietf.org/html/draft-zyp-json-schema-04</a><br>
<strong>Validation:</strong><br>
<a href="http://tools.ietf.org/html/draft-fge-json-schema-validation-00">http://tools.ietf.org/html/draft-fge-json-schema-validation-00</a></p>
<h3 id="software">Software</h3>
<p>There is a variety of <a href="http://json-schema.org/implementations.html">implementations</a> available. Since I chose to focus on v4 and since I'm a webnerd, <strong>I'll be using the <a href="https://github.com/geraintluff/tv4">tv4</a> validator for the examples</strong>.</p>
<h2 id="usage">Usage</h2>
<p><strong>NB!</strong> This article is in no way a usage reference!!<br>
It's more a collection of the things I stumbled across trying to figure out how this JSON-Schema thing works. Some important bits, and some of the things I found really useful. See the <a href="#further">further reading</a> section for more possibilities and options.</p>
<h3 id="type"><em>type</em></h3>
<p>Using &quot;type&quot; you can specify the datatype required for the current object. The value can be a string or an array. Available values are: <strong>object, array, string, boolean, integer, number, null</strong>. The following requires the data to be either an object or a string.</p>
<pre><code>{
	&quot;type&quot; : [&quot;object&quot;,&quot;string&quot;]
}

tv4.validate({}, schema) // true
tv4.validate([], schema) // false
</code></pre>
<h3 id="enum"><em>enum</em></h3>
<p>Using &quot;enum&quot; you can define an array with elements of any type. Data must be equal to one of the elements to validate.</p>
<pre><code>{
	&quot;enum&quot; : [[1,true,0], {}, 28, &quot;Burbon&quot;]
}

tv4.validate([1,true,0], schema) // true
tv4.validate(34, schema) // false
</code></pre>
<h3 id="required"><em>required</em></h3>
<p>Using &quot;required&quot; you can define an array of required properties. Its value is an array of strings.</p>
<pre><code>{
	&quot;required&quot; : [&quot;title&quot;,&quot;origin&quot;]
}

tv4.validate({&quot;title&quot; : &quot;&quot;, &quot;origin&quot; : &quot;&quot;}, schema) // true
tv4.validate({&quot;title&quot; : &quot;&quot;}, schema) // false
</code></pre>
<h3 id="properties"><em>properties</em></h3>
<p>Using &quot;properties&quot; you can further specify an object's properties. It is an object where each value is a separate schema.</p>
<pre><code>{
	&quot;properties&quot; : {
		&quot;title&quot;   : { &quot;type&quot; : &quot;string&quot; },
		&quot;weight&quot;  : { &quot;type&quot; : &quot;number&quot; }
	}
}

tv4.validate({&quot;title&quot; : &quot;&quot;, &quot;weight&quot; : 2}, schema) // true
tv4.validate({&quot;title&quot; : &quot;&quot;, &quot;weight&quot; : &quot;2&quot;}, schema) // false
</code></pre>
<h3 id="items"><em>items</em></h3>
<p>Using &quot;items&quot; you can specify the requirements for the items in an array. It can be a single schema (applied to every item) or an array of schemas (applied positionally). The following requires the first element to be a string and the second an object.</p>
<pre><code>{
	&quot;items&quot; : [
		{ &quot;type&quot; : &quot;string&quot; },
		{ &quot;type&quot; : &quot;object&quot; }
	]
}

tv4.validate([&quot;&quot;,{}], schema) // true
tv4.validate([&quot;&quot;,true], schema) // false
</code></pre>
<h3 id="pattern"><em>pattern</em></h3>
<p>Using &quot;pattern&quot; you can validate using regular expressions. Powerful stuff!</p>
<pre><code>{
	&quot;properties&quot; : {
		&quot;url&quot; : { &quot;type&quot; : &quot;string&quot;, &quot;pattern&quot; : &quot;(http|ftp|https):\\/\\/[\\w-]+(\\.[\\w-]+)+([\\w.,@?^=%&amp;:\\/~+#-]*[\\w@?^=%&amp;\\/~+#-])?&quot; }
	}
}

tv4.validate({&quot;url&quot; : &quot;http://google.com&quot;}, schema) // true
tv4.validate({&quot;url&quot; : &quot;htt:/googleco.m&quot;}, schema) // false
</code></pre>
<h3 id="ref"><em>$ref</em></h3>
<p>Using &quot;$ref&quot; you can reference other schemas. You can use a URI or a <code>#</code> for internal referencing. Using <em>definitions</em> as a location for your internally referenced schemas is not a rule but a common practice.</p>
<pre><code>{
	&quot;items&quot; : { 
		&quot;$ref&quot; : &quot;#/definitions/bean&quot;
	},
	&quot;definitions&quot; : {
		&quot;bean&quot; : {
			&quot;type&quot; : &quot;object&quot;,
			&quot;required&quot; : [&quot;origin&quot;],
			&quot;properties&quot; : {
				&quot;origin&quot; : { &quot;enum&quot; : [&quot;kenya&quot;,&quot;rawanda&quot;] }
			}
		}
	}
}

tv4.validate([{&quot;origin&quot; : &quot;kenya&quot;}], schema) // true
tv4.validate([{&quot;origin&quot; : &quot;brazil&quot;}], schema) // false
tv4.validate([&quot;kenya&quot;,&quot;rawanda&quot;], schema) // false
</code></pre>
<h3 id="allof"><em>allOf</em></h3>
<p>Using &quot;allOf&quot; you can define an array of schemas where your data elements must validate against all of them.</p>
<pre><code>{
	&quot;allOf&quot; : [
		{ &quot;type&quot; : &quot;integer&quot; },
		{ &quot;minimum&quot; : 6 }
	]
}

tv4.validate(6, schema) // true
tv4.validate(5, schema) // false
</code></pre>
<h3 id="oneof"><em>oneOf</em></h3>
<p>Using &quot;oneOf&quot; you can define an array of schemas where your data elements must validate against one (and only one) of them.</p>
<pre><code>{
	&quot;oneOf&quot; : [
		{ &quot;type&quot;    : &quot;integer&quot; },
		{ &quot;minimum&quot; : 6 }
	]
}

tv4.validate(5, schema) // true
tv4.validate(6, schema) // false
</code></pre>
<h3 id="anyof"><em>anyOf</em></h3>
<p>Using &quot;anyOf&quot; you can define an array of schemas where your data elements can validate against any (at least one) of them.</p>
<pre><code>{
	&quot;anyOf&quot; : [
		{ &quot;type&quot;    : &quot;integer&quot;  },
		{ &quot;minimum&quot; : 6 }
	]
}

tv4.validate(5, schema) // true
tv4.validate(6, schema) // true
</code></pre>
<h3 id="not"><em>not</em></h3>
<p>Using &quot;not&quot; you can define a schema your data elements should not validate against.</p>
<pre><code>{
	&quot;not&quot; : { &quot;type&quot; : &quot;string&quot; }
}

tv4.validate(1, schema) // true
tv4.validate(&quot;test&quot;, schema) // false
</code></pre>
<h3 id="errorhandling">Error handling</h3>
<p><strong>(tv4 specific)</strong></p>
<p>I just thought I'd quickly mention how tv4 handles a failure:</p>
<pre><code>tv4.validate([], {&quot;type&quot; : &quot;object&quot;})
var err = tv4.error
console.log(err.message, err.schemaPath, err.dataPath)
// Nested failures, if any, are collected in the subErrors array
if (err.subErrors) {
	err.subErrors.forEach(function(sub) {
		console.log(sub.message, sub.schemaPath, sub.dataPath)
	})
}
</code></pre>
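<p>If you want every error collected in a single pass, rather than stopping at the first failure, tv4 also provides <code>validateMultiple</code>:</p>
<pre><code>var result = tv4.validateMultiple([], {&quot;type&quot; : &quot;object&quot;})
result.valid   // false
result.errors  // an array of all validation errors
result.missing // any unresolved $ref URIs
</code></pre>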
<h2 id="aidfurtherafurtherreading"><a id="further"></a>Further reading</h2>
<p>I would really recommend reading through the <a href="https://github.com/geraintluff/tv4/tree/master/tests/tests">tests for tv4</a>; they provide excellent usage examples for the different possibilities. On the JSON-Schema <a href="http://json-schema.org/">website</a> you will find the <a href="http://json-schema.org/documentation.html">documentation</a> and some great <a href="http://json-schema.org/example2.html">examples</a>.</p>
<h2 id="pros">Pros</h2>
<p>One of the biggest benefits of JSON-Schema validation is that it allows for a cleaner codebase. You can trust your data. That in turn improves readability and maintainability, which leads to better and more robust applications. In the end: a better user experience.</p>
<h2 id="cons">Cons</h2>
<p>It can be quite tedious building a good schema describing your data. And of course, if you change your data structures, you need to update your schema (in addition to your code). But considering how this approach will simplify your codebase, I would definitely say it's well worth it.</p>
<h1 id="realworldexample">Real world example</h1>
<p><strong>Data</strong></p>
<pre><code>{
	&quot;title&quot;   : &quot;Kapsokisio&quot;,
	&quot;origin&quot;  : &quot;Kenya&quot;,
	&quot;variety&quot; : [&quot;SL28&quot;,&quot;SL34&quot;,&quot;Bourbon&quot;],
	&quot;process&quot; : &quot;Washed&quot;,
	&quot;roast&quot; : {
		&quot;level&quot; : 4,
		&quot;date&quot;  : &quot;08.02.2012&quot;
	},
	&quot;bag&quot; : {
		&quot;weight&quot; : 354,
		&quot;date&quot;   : &quot;08.02.2012&quot;
	},
	&quot;brew_tip&quot; : {
		&quot;method&quot; : &quot;pourover&quot;,
		&quot;grind&quot;  : &quot;medium&quot;,
		&quot;vessel&quot; : &quot;chemex&quot;
	}
}
</code></pre>
<p><strong>Schema</strong></p>
<pre><code>{
	&quot;type&quot; : &quot;object&quot;,
	&quot;required&quot; : [&quot;title&quot;,&quot;origin&quot;,&quot;variety&quot;,&quot;process&quot;,&quot;roast&quot;,&quot;bag&quot;],
	&quot;properties&quot; : {
		&quot;title&quot;    : { &quot;type&quot; : &quot;string&quot;  },
		&quot;origin&quot;   : { &quot;type&quot; : &quot;string&quot;  },
		&quot;variety&quot;  : { &quot;type&quot; : &quot;array&quot;   },
		&quot;process&quot;  : { &quot;type&quot; : &quot;string&quot; },
		&quot;bag&quot;      : { &quot;$ref&quot; : &quot;#/definitions/bag&quot; },
		&quot;roast&quot;    : { &quot;$ref&quot; : &quot;#/definitions/roast&quot; },
		&quot;brew_tip&quot; : { &quot;$ref&quot; : &quot;#/definitions/brew_tip&quot; }
	},
	&quot;definitions&quot; : {
		&quot;roast&quot; : {
			&quot;type&quot; : &quot;object&quot;,
			&quot;required&quot; : [&quot;level&quot;, &quot;date&quot;],
			&quot;properties&quot; : {
				&quot;level&quot; : { &quot;type&quot; : &quot;integer&quot; },
				&quot;date&quot;  : {
					&quot;type&quot; : &quot;string&quot;,
					&quot;pattern&quot; : &quot;^\\d{2}([./-])\\d{2}\\1\\d{4}$&quot;
				}
			}
		},
		&quot;bag&quot; : {
			&quot;type&quot; : &quot;object&quot;,
			&quot;required&quot; : [&quot;weight&quot;, &quot;date&quot;],
			&quot;properties&quot; : {
				&quot;weight&quot; : { &quot;type&quot; : &quot;number&quot; },
				&quot;date&quot;   : {
					&quot;type&quot; : &quot;string&quot;,
					&quot;pattern&quot; : &quot;^\\d{2}([./-])\\d{2}\\1\\d{4}$&quot;
				}
			}
		},
		&quot;brew_tip&quot; : {
			&quot;type&quot; : &quot;object&quot;,
			&quot;required&quot; : [&quot;method&quot;,&quot;grind&quot;,&quot;vessel&quot;],
			&quot;properties&quot; : {
				&quot;method&quot; : { &quot;type&quot; : &quot;string&quot; },
				&quot;grind&quot;  : { &quot;type&quot; : &quot;string&quot; },
				&quot;vessel&quot; : { &quot;type&quot; : &quot;string&quot; }
			}
		}
	}
}</code></pre>
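<p>To round things off, here is how the data would be checked against the schema (assuming both are loaded into <code>data</code> and <code>schema</code> variables):</p>
<pre><code>tv4.validate(data, schema) // true

delete data.origin // &quot;origin&quot; is required
tv4.validate(data, schema) // false
tv4.error.message // e.g. &quot;Missing required property: origin&quot;
</code></pre>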
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>