
Cross-Account use of AWS CLI

The documentation around using the AWS CLI from an AWS EC2 instance on one account to access resources in another account is not great. The information is all there, somewhere, but it’s scattered across many places, and to derive what you need from those sources you pretty well have to read all of them. Two useful places to begin, though you will need to spiral out from them, are:

However, I’ll try to give a summary and simple example here. This won’t include code or detailed setup instructions, although I hope to follow up with a code demonstration expressed in Terraform.
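That said, to give a flavour of the shape of the thing: the usual pattern is for the EC2 instance’s role in the first account to be allowed to assume a role in the second account, with the CLI doing the assumption via a named profile. A minimal sketch of the configuration – the profile name, account ID, role name and region here are placeholders of mine, not from any real setup:

# ~/.aws/config on the EC2 instance (sketch only; all names are placeholders)
[profile cross-account]
role_arn = arn:aws:iam::111111111111:role/SomeRoleInTheOtherAccount
credential_source = Ec2InstanceMetadata
region = ap-southeast-2

after which something like aws s3 ls --profile cross-account should operate against the second account, assuming the role’s policies allow it.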

(Continued)

Oh no! The certificate has expired!

Hey kids! You know those SSL certificates you obtained and installed today?

Yeah, put a reminder in your calendar right now for a week before the expiry date, so you don’t get caught out.

Future you will thank you.
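If you can’t remember when that is, openssl will tell you – either from the certificate file, or by asking the server directly (the file name and host here are placeholders):

openssl x509 -noout -enddate -in yourcert.pem
openssl s_client -connect example.net:443 -servername example.net < /dev/null 2>/dev/null | openssl x509 -noout -enddate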

OpenSSL on High Sierra

Recently I finally got around to reading the excellent OpenSSL Cookbook from Ivan Ristić – you can grab a free copy via https://www.openssl.org/docs/ – and the first question in my mind was “what version of OpenSSL is already installed on my Mac?”. A quick check showed it’s there pre-built in High Sierra in /usr/bin:

$ /usr/bin/openssl version
LibreSSL 2.2.7

Hmm. Wikipedia tells me that this is a (somewhat controversial) OpenBSD fork from around April 2014, so not necessarily in synch with the “official” OpenSSL code.

I elected to download and build from the OpenSSL code instead, partly so that it would be easier for me to keep it updated. The instructions for obtaining and building it are quite clear, although you do have a choice to make – pull the code from GitHub, or download it as a tarball. I opted for the latter. The build assumes you have Xcode and all the other bits and pieces installed, but then again you would be unlikely to be doing this if you didn’t. The build went just as it said on the box:

./Configure darwin64-x86_64-cc shared enable-ec_nistp_64_gcc_128 no-ssl3 no-comp --openssldir=/usr/local/ssl
make depend
sudo make install

after which we can verify success:

$ openssl version -a
OpenSSL 1.1.0h  27 Mar 2018
built on: reproducible build, date unspecified
platform: darwin64-x86_64-cc
options:  bn(64,64) rc4(16x,int) des(int) idea(int) blowfish(ptr) 
compiler: cc -DDSO_DLFCN -DHAVE_DLFCN_H -DNDEBUG -DOPENSSL_THREADS -DOPENSSL_NO_STATIC_ENGINE -DOPENSSL_PIC -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DRC4_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DPADLOCK_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/usr/local/ssl\"" -DENGINESDIR="\"/usr/local/lib/engines-1.1\"" 
OPENSSLDIR: "/usr/local/ssl"
ENGINESDIR: "/usr/local/lib/engines-1.1"

“But wait!” you say, “haven’t we just overwritten the LibreSSL installation?”

Fortunately not – unless we tell it otherwise, the build drops the binaries in /usr/local/bin rather than /usr/bin, where the original macOS install lives. As long as your $PATH specifies /usr/local/bin before /usr/bin you will get the expected one:

$ openssl version
OpenSSL 1.1.0h  27 Mar 2018
$ /usr/bin/openssl version
LibreSSL 2.2.7

There’s one more thing that you should do for convenience. The SSL directory created by the install contains an empty folder, /usr/local/ssl/certs – the intention is that well-known trusted root certificates can be placed there, which is a slightly opaque task.

What I did was grab the set of root certificates maintained by Mozilla:

https://hg.mozilla.org/mozilla-central/raw-file/tip/security/nss/lib/ckfw/builtins/certdata.txt

but that then needs to be converted from the format they use to a standard PEM format. There are a number of projects to help with this; the one I selected was agl/extract-nss-root-certs. The only prerequisite was to install Go, then the tool Just Worked and I dropped the resulting PEM in /usr/local/ssl/certs.
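For what it’s worth, the whole exercise boils down to something like the following – this is a from-memory sketch, so check the tool’s README for the exact invocation and output file name:

# grab Mozilla's trusted root list and convert it to PEM with agl's tool
curl -O https://hg.mozilla.org/mozilla-central/raw-file/tip/security/nss/lib/ckfw/builtins/certdata.txt
go run convert_mozilla_certdata.go > ca-certificates.pem
sudo cp ca-certificates.pem /usr/local/ssl/certs/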

If you’re not familiar with OpenSSL, it’s worth having a look at. It’s an absolute Swiss-Army-Chainsaw of a tool, able to do everything from generating keys to checking site security to running as a server – Ristić’s book covers all of the usual things you might want to do, and some unusual ones, and helps demystify what is generally regarded as a user interface from hell.

An example of what you can do? Well, here’s creation of a self-signed certificate:

# build RSA private key
openssl genrsa -aes256 -out fd.key 2048
openssl rsa -text -in fd.key -noout

# extract public key
openssl rsa -in fd.key -pubout -out fd-public.key

# create CSR with prebuilt config file
openssl req -new -config fd.cnf -key fd.key -out fd.csr
openssl req -text -in fd.csr -noout

# create self-signed certificate
openssl x509 -req -days 365 -in fd.csr -signkey fd.key -out fd.crt -extfile fd.ext
openssl x509 -text -in fd.crt -noout

the fd.cnf was:

[req]
prompt = no
distinguished_name = dn
req_extensions = ext
input_password = letmein

[dn]
CN = example.net
emailAddress = admin@example.net
O = Example Co
L = London
C = GB

[ext]
subjectAltName = DNS:example.net, DNS:*.example.net

and fd.ext was:

subjectAltName = DNS:example.net, DNS:*.example.net

Addendum
The talented John Slee points to some background on the LibreSSL fork that is worth having a look at:

The OpenBSD folks have maintained OpenSSL 1.0.x API compatibility and AFAIK are currently working on 1.1.x APIs too. They have also created their own TLS API that is much, much more resistant to incorrect/insecure use, named “libtls”. The changes they made vs. where OpenSSL was at the time of the fork are worth a look, I think — an object lesson in secure C coding

See:
https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.7.2-relnotes.txt
https://man.openbsd.org/tls_init.3

TLS 1.3 – It’s like Christmas

Via The Register I see that TLS 1.3 has finally rolled off the standards and committee draft assembly line. This is pretty big news, not least because we’ve been working with the current TLS 1.2 standard for almost a decade, and the defects in it have well and truly been discovered and exploited.

There are a good number of reasonable news articles around about this that are worth reading for more detail than I’m going to give you here, such as this nice one from CSO Online, which gives a background on what TLS is all about. You might also like to brush up on what the TLS/SSL handshake is, as this is one of the places where some of the nastiest exploits of TLS 1.2 and earlier have been found.

I can also recommend the articles from eWeek and Kinsta, which cover off different aspects in somewhat more detail than the CSO Online article.

So. What do we get out of TLS 1.3?

  • It’s faster than its predecessors, partially through improving the initial handshake algorithm, and partially through the way it deals with ongoing encryption of traffic after the handshake;
  • it deprecates the use of a lot of broken cryptographic algorithms, enforcing very strong encryption (although servers are allowed to back down to TLS 1.2 if that’s all that the client supports);
  • it plugs a lot of security flaws, particularly the horrible ones where the handshake is compromised and where a man-in-the-middle can silently intercept traffic;
  • it’s much more resistant to attacks that involve spoofing the server or client identity;
  • it supports forward secrecy, so that past traffic remains protected even if the server’s private key is later compromised.

One thing that arises from TLS 1.3 that’s going to upset a lot of traditional security officers though is that it pretty well breaks any current ability to examine traffic on the network via deep packet inspection and passive monitoring – the article at The Register has some good discussion of this. My opinion is that this is actually a good thing. A broad and general bad habit has arisen over the last decades where there is a reliance on perimeter and network security, and endpoint and application security is considered too hard. This is not a sustainable way of thinking, and is demonstrably one of the root causes of a lot of the big data leaks we’ve seen (feel free to argue about this with me, I’d love the debate). Going forward let’s instead switch to envisioning our systems as places where we perform secure and safe computation against well protected data stores in a hostile and dangerous network environment. If we assume that our communications are compromised, we can focus on preventing compromise of the servers and clients, and be as safe operating in the cloud as in private data centers.

One caveat around the state of this though – currently a lot of the public focus is on adoption of TLS 1.3 within browsers. The more interesting question is whether it’s been rolled into the operating systems as well, and to what extent. Right at the time of writing, the answer is “sort of, watch this space”. There are some notes for Windows and macOS/iOS out there, but I would expect that the major OS vendors will silently roll this out pretty quickly.
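If you want to check a particular server for yourself, a new enough OpenSSL (1.1.1 onwards – the 1.1.0h build I mentioned above predates TLS 1.3) can ask directly; look for TLSv1.3 in the Protocol line of the SSL-Session output. The host here is a placeholder:

# only negotiates successfully if the server supports TLS 1.3
openssl s_client -connect example.com:443 -tls1_3 < /dev/null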

Bootstrapping AWS with Terraform and CodeCommit

A rough model that I’ve been working on and thinking about recently is for the AWS account (or accounts) to be put together so that there’s a “bastion” or “bootstrap” instance that can be used to build out the rest of the environment. There is a certain chicken-and-egg problem around this, particularly if you want to use AWS resources and services to bootstrap this up.

I’m going to talk (at length) about a solution I’ve recently gotten sorted out. This has a certain number of prerequisites that I’ll outline before getting into how it all hangs together. The key thing is to limit manual tinkering as far as possible, and script up as much as possible, so that the result is both repeatable and able to be exposed to standard sorts of code-cutting practices.

One caveat around what I’m presenting – the Terraform state is stored locally to where we are running Terraform, which is not best practice. Ideally we’d be tucking it away in something like S3, which I will probably cover at a later point.
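For the curious, shifting the state into S3 later is only a few lines of backend configuration – a sketch only, with placeholder bucket and table names, and assuming the bucket (and, optionally, a DynamoDB table for locking) already exist:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"           # placeholder
    key            = "bootstrap/terraform.tfstate"
    region         = "ap-southeast-2"                # placeholder
    dynamodb_table = "terraform-state-lock"          # optional, for state locking
    encrypt        = true
  }
}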

(Continued)

Workshop, Mark II

I’ve moved my workshop to a new location, which has the advantages of security, lower cost, and a far more pleasant setting. Also, apparently it’s a studio now – if only I could either monetise it or adopt the life of a penniless bohemian starving artist.


Step one was, with the aid of some friends, to move everything from one location to another

and then scrape things up off the floor

I went back today and erected some IKEA shelves, and a table (out of shot), which brings me much closer to the target shape of the space:

The crappy shelves to the right of the image will be torn apart, and I will build a timber rack there. So at the moment the rough plan of action is:

  • get a saw-set so that I can properly sharpen the rip saws
  • sharpen all the saws
  • make the timber rack
  • make the arming rack for home

On this last, the studio complex runs a scheme where you can drop off items or materials you don’t have a need for, and they are free for the taking and re-use. This may work out well for me, as there are a number of more professional woodworkers and furniture makers on the site, and I’ve already scored some giant planks of oak that I will be able to cut down for the arming rack – it’s quite likely that I won’t need to purchase any timber at all for that project, as long as I’m happy to rip materials down to my desired dimensions.

I need the exercise.

Creating a custom Kylo Sandbox

I had a need – or desire – to build a VM with a certain version of NiFi on it, and a handful of other Hadoop-type services, to act as a local sandbox. As I’ve mentioned before, I do find it slightly more convenient to use a single VM for a collection of services, rather than a collection of Docker images, mainly because it allows me to open the bonnet of the box and get my hands dirty fiddling with the insides of the machine. Since I wanted to be picky about what was getting installed, I opted to start from scratch rather than re-using the HDP or Kylo sandboxes.

The only real complication was that I realised that I also wanted to drop Kylo on this sandbox, which happened after I’d already gone down the route of getting NiFi installed. This was entertaining as it revealed various ways in which the documentation and scripts around installing Kylo have some inadvertent hard-wired assumptions about where and how NiFi is installed that I needed to work around.

(Continued)

Smoke testing Kafka in HDP

Assuming that you have a vanilla HDP install, or the HDP sandbox, or have installed a cluster with Ambari and added Kafka, the following may help you to smoke test the behaviour of Kafka. Obviously if you’ve configured Kafka or ZooKeeper to run on different ports this isn’t going to help you much, and it also assumes that you are testing on one of the cluster boxes, along with a ton of other assumptions.

The following assumes that you have found and changed to the Kafka installation directory – for default Ambari or HDP installations, this is probably under /usr/hdp, but your mileage may vary. To begin with, you might need to pre-create a testing topic:

bin/kafka-topics.sh \
    --zookeeper localhost:2181 \
    --create --replication-factor 1 \
    --partitions 1 \
    --topic test

then in one terminal window, run a simple consumer:

bin/kafka-console-consumer.sh \
    --zookeeper localhost:2181 \
    --topic test \
    --from-beginning

Note that this reads from the beginning of the topic; if you just want to tail recent entries, omit the --from-beginning instruction. Finally, in another terminal window, open a dummy producer:

bin/kafka-console-producer.sh \
    --broker-list localhost:6667 \
    --topic test

There is an annoying asymmetry here – the consumer and most other utilities look to ZooKeeper to find the brokers, but the dummy producer requires an explicit pointer to one or more of the brokers. In the producer window, type stuff, and you should see it echoed in real time in the consumer window. When finished, ^C out of the producer and consumer, and consider your work done.
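If the messages don’t show up, a quick sanity check is to ask Kafka what it thinks exists – same assumptions as above about ports and the working directory:

# list all topics known to the cluster
bin/kafka-topics.sh --zookeeper localhost:2181 --list

# show partition, replica and leader details for the test topic
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic test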

Lies, Damned Lies and Programmers

I recently came across a really nice set of articles – not directly related to one another – dealing with various profound errors that programmers and system designers fall into when dealing with names and addresses.

The TL;DR if you don’t read these: names and addresses are hard and most things you believe about them are wrong.

Let’s start with Falsehoods Programmers Believe About Names. Without even trying, the author lists 40 things we believe about names that are just plain wrong.

In a similar vein, Falsehoods programmers believe about addresses, which particularly speaks to me. One of the fundamental errors about addresses is to think they identify a location. This is incorrect. An address might identify a location, but it is fundamentally a description which instructs a postman how to deliver a letter or parcel. Substitute pizza operative, Amazon driver or writ server as desired.

Even without getting into the weirdness around the actual shape of the planet, Falsehoods programmers believe about geography touches on place names.

And as a bonus: Falsehoods programmers believe about time – computers prove to be pretty bad clocks, and working out a calendar is very complicated.

A Demonstration NiFi Cluster

In order to explore NiFi clustering and the NiFi site-to-site protocol, I decided that I could use a minimal installation – as I’m really just exploring the behaviour of NiFi itself, I don’t need to have any Hadoop environment running as well. To this end, my thought was that I could get the flexibility I need to just play around by building a minimal CentOS 7 virtual machine running in VirtualBox. The plan was to have little more than a Java 8 SDK and NiFi installed on this, and then I would clone copies of it which would be modified to be independent nodes in a cluster. At the time of writing this is still in progress, but I thought it was worth capturing some information about how I proceeded to get my VM prepared.

There are a handful of requirements for this VM:

  1. It needs a static IP (so that I can assign different static IPs to the clones, later)
  2. It needs to be able to reach out to the broader internet, in order to pull down OS updates and similar
  3. I need to be able to ssh to it from my desktop
  4. Different instances of the VM need to be able to reach each other easily
  5. A Java 8 JVM is needed
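On that last point, getting a Java 8 JDK onto a stock CentOS 7 box is a one-liner from the standard repositories – a sketch only, assuming you’re happy with OpenJDK rather than Oracle’s JDK:

# install the OpenJDK 8 development kit, then confirm what ends up on the path
sudo yum install -y java-1.8.0-openjdk-devel
java -version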

(Continued)