In 2015 I took it upon myself to pass all five of the AWS certification exams. For me it was a way to expand my breadth of knowledge across some primary AWS services, and it gave me a structured, measured study regimen to follow on nights and weekends. Not for everyone, but for me, the AWS domain knowledge gained was well worth the time and expense.
Since passing all the exams, I have been asked a few times how I approached studying for them (I studied on my own, no boot camps). So here is what worked for me:
- If you haven't already, create a new AWS account so you are eligible for the free tier. Use this account for your studying in order to save some money.
- Read all the recommended white papers, twice: once at the start of your study effort, then again about three-quarters of the way through. This will help you make connections between what is covered in the white papers and what you've studied in other sources.
- Read all relevant product FAQ pages (CloudFormation, IAM, EC2, etc.). The FAQs are full of well-answered questions about the capabilities of the products (which is, after all, what the exams test).
- Get as much experience using the products as possible. Nothing beats real experience. Build hello-world applications, deploy them, create CloudFormation templates for them, etc. Read all the help links in the AWS Console while doing this. If a product's "Getting Started" docs have some sort of "build this" tutorial, do it: don't just read it, actually go through the steps.
- Take the practice exam. Use it as a focusing aid. Research each question until you are confident you have the correct answer. This will deepen your knowledge of the topic the question covers and aid your answers to related questions on the actual exam. Use the percentage breakdown of your practice exam results to determine where you need to study more. For example, if you score low in "Security", double your efforts in reviewing AWS security topics.
- Be well rested for the exams: no late-night cramming the night before. These are long exams (170 minutes for the DevOps Professional exam), and a well-rested brain is a requirement.
Hope this helps others looking to take one or more of the exams. Let me know what study habits worked for you.
When designing any software system you are always making assumptions, whether it's an extension to an existing system or a new greenfield project. In either case you will be dealing with new libraries, products, and/or patterns. You'll of course use your existing knowledge, experience, and quick reviews of documentation to plot a course for the system to be developed, but the fact is much of the new system will be based on assumed functionality, and you will likely be unaware of some key capabilities of the new tech.
Discovery of key features, and proving or disproving assumptions as soon as possible, is paramount to a successful software project. A very similar point of view is prevalent in software testing: you want automated tests to run as soon as code is committed to a feature branch. This alerts developers to bugs they have introduced very early, enabling them to correct the issue quickly before other developers build on top of the defective code. The goal is the same with architecture design: discover false assumptions and unknown functionality as soon as possible, or else you will build upon those false assumptions and/or fail to leverage functionality unknown to you. You will most likely discover these issues eventually, but the later the stage, the higher the cost of accommodating the changes such discoveries necessitate.
Say we have this premise:
A new project to be developed involves a REST API providing a CRUD (Create, Read, Update, Delete) interface to a Customer resource. Based on the developed model for Customer and the predicted usage patterns, it is felt that DynamoDB will be an ideal persistence engine for the resource, though the team has not used DynamoDB before.
So, given this premise, as a first step to accelerate discovery you will want to implement a vertical slice through the system, as thin as possible, from end to end. A good course of action in this example would be to leverage a micro-framework such as Sinatra or Flask to very quickly, in just a few lines of code, put a REST interface for your Customer resource in place. Keep the resource thin; in this example, just give it a name attribute to start. Also, don't worry much about validation or design at this point. Then implement the DynamoDB interactions in the appropriate REST framework methods, without worrying about well-designed, DRY code. You just want the happy path functional, allowing you to perform CRUD operations on data in DynamoDB via the REST interface.
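To make this concrete, here is a minimal sketch of what the slice's persistence layer might look like. The function names and the `customer_id` key are hypothetical, and an in-memory stand-in mimics the subset of the boto3 Table API the slice would use, so the happy path can be exercised without AWS credentials; the real slice would swap in `boto3.resource("dynamodb").Table("customers")` and hang these functions off the framework's route handlers.

```python
# Sketch of the thin slice's persistence layer (hypothetical names).
# In the real slice: table = boto3.resource("dynamodb").Table("customers")

class FakeCustomerTable:
    """In-memory stand-in exposing the boto3 Table call shapes the slice uses."""

    def __init__(self):
        self._items = {}

    def put_item(self, Item):
        self._items[Item["customer_id"]] = dict(Item)

    def get_item(self, Key):
        item = self._items.get(Key["customer_id"])
        return {"Item": item} if item else {}

    def delete_item(self, Key):
        self._items.pop(Key["customer_id"], None)


def create_customer(table, customer_id, name):
    # Happy path only: no validation, as befits the throwaway slice.
    table.put_item(Item={"customer_id": customer_id, "name": name})


def read_customer(table, customer_id):
    # get_item returns a dict with an "Item" key only when the item exists.
    return table.get_item(Key={"customer_id": customer_id}).get("Item")


def delete_customer(table, customer_id):
    table.delete_item(Key={"customer_id": customer_id})


if __name__ == "__main__":
    table = FakeCustomerTable()
    create_customer(table, "c1", "Ada")
    print(read_customer(table, "c1"))  # {'customer_id': 'c1', 'name': 'Ada'}
    delete_customer(table, "c1")
    print(read_customer(table, "c1"))  # None
```

Wiring these four functions into Sinatra or Flask route handlers gives you the whole slice in well under a hundred lines.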
You can now use this micro app to test any assumptions made in the initial design and quickly discover any quirks of the new tech. For instance, DynamoDB supports conditional updates, which can be utilized to support upserts. You might miss this feature if you just scan the docs, and end up implementing much of your system before discovering it. By implementing your micro app very early, your chances of discovering this feature are greatly increased, which will inform the design of the system from the beginning, sparing you the pain and high cost of wedging these changes in at a later stage of the project. As another example, you may have assumed you would be able to retrieve items from DynamoDB based on any item attribute (name, age, height, etc.). Although this is possible via table scanning or by adding extra indexes to your table, there are definite implications and restrictions in doing so. Having a system built quickly that exercises the features of DynamoDB (including item retrieval) increases the chance that you will discover these gotchas early, and as a bonus you have a realistic system in place to investigate the newly discovered aspects, e.g., create an index on a table and show that the index can be leveraged to get an item via the REST interface.
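To illustrate the conditional-update discovery above, the sketch below builds the keyword arguments for boto3's `Table.update_item` and `Table.put_item` calls as plain dicts (the `customer_id` key and function names are hypothetical, carried over from the slice). Note that `update_item` creates the item when it does not exist, so a plain update doubles as an upsert, while adding a `ConditionExpression` like `attribute_not_exists(...)` turns a put into a create-only operation.

```python
# Build DynamoDB request kwargs as plain dicts; no AWS connection needed,
# so the shapes can be inspected and tested locally.

def upsert_name_request(customer_id, name):
    """Kwargs for Table.update_item. update_item creates the item if it
    does not already exist, so this single call behaves as an upsert."""
    return {
        "Key": {"customer_id": customer_id},
        "UpdateExpression": "SET #n = :name",
        # 'name' is a DynamoDB reserved word, hence the placeholder.
        "ExpressionAttributeNames": {"#n": "name"},
        "ExpressionAttributeValues": {":name": name},
        "ReturnValues": "ALL_NEW",
    }


def create_only_request(customer_id, name):
    """Kwargs for Table.put_item. The ConditionExpression makes the put
    raise ConditionalCheckFailedException if the item already exists."""
    return {
        "Item": {"customer_id": customer_id, "name": name},
        "ConditionExpression": "attribute_not_exists(customer_id)",
    }


if __name__ == "__main__":
    req = upsert_name_request("c1", "Ada")
    print(req["UpdateExpression"])  # SET #n = :name
```

In the micro app you would pass these dicts straight through, e.g. `table.update_item(**upsert_name_request(cid, name))`, and watch how DynamoDB actually behaves rather than guessing from the docs.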
So with your thin slice of functionality up and running, run through as many experiments as you deem appropriate. You need to be confident that you've discovered what is needed. There is no magic formula; it is a learned skill, and you will improve as you apply this strategy to more and more projects.
You may be tempted to simply evolve your vertical-slice codebase into the final system. I'd warn against this for a couple of reasons. First, if you go in with the perception that you may evolve this thin slice into the ultimate final product, you may be susceptible to overbuilding it; it will then take longer to build, and we want discovery as soon as possible so we do not block the actual implementation of the system. Secondly, this slice app will be built outside of your development team's prescribed development process. Since you want it to be as quick an exercise as possible, it won't check all the boxes when it comes to documented stories, points, TDD, and such, and you do not want to set a precedent of initiating a codebase destined for production in such an out-of-process manner.
I've used this strategy in many software projects, and it has always proved very beneficial and a key component of each project's ultimate success. This practice demonstrates pragmatism, simplicity, and a bias to action: key tenets that every software development team should practice and that are essential for remaining competitive in today's software market.