Monday, January 26, 2015

Learning how to learn - A Coursera MOOC

Let me start off with a little introduction: I will be using my blog to complete an assignment for a MOOC I am taking. The MOOC is "Learning How to Learn", offered by UC San Diego and led by Dr. Barbara Oakley and Dr. Terrence Sejnowski. More about that here.

For this assignment I will be covering three concepts explained in the course that will make you a better learner, and that I am planning to use to tackle the issues I know - from experience - I struggle with when it comes to learning.

But first, something about me: I am currently aiming to become more proficient in my learning abilities. For me that means learning faster and with less frustration, making the most of the invested time and enabling me to better achieve the goals I set for myself. Those goals are: certain IT certifications, Coursera IT MOOCs, and learning IT standards. Almost everything I set out to learn has something to do with my professional life. For 2015, my personal goals include earning a new IT certification, starting up a professional blog and learning a new IT standard.

With the above personal goals (and to fulfil the mission of the course assignment) I’ll be explaining three core concepts - and how they relate to each other - first in words (which will be coming in a minute) and then by synthesizing them in a mind map. The mind map serves not only to explain the concepts but also to look at the broader picture (something important in general when trying to master and successfully apply concepts) and to visually document how the concepts intertwine and reinforce each other to help you learn better and more efficiently. I’ll also be enriching the provided material with links to tools I use to apply the concepts in practice. When I do so, I’ll provide the link to the source material/tool/website so that you can use them too when you find them applicable.

The three main concepts I’ll be explaining are: Procrastination, Recall and Chunks. All three greatly influence how you set out to learn (and master) something.

Procrastination : the art of postponing

Procrastination is something that happens almost automatically. When you need to do something that seems hard (note the “seems”), your brain activates a process to get you out of that uncomfortable thought (the fact that you need to do that “hard” task) to improve “the here and now.” It is a reflex of your brain. The point is: once you know that reflex is there, you can control it without applying much “willpower”. “How?” you’ll ask. There are three main methods you’ll need to apply:

1. Maintain a task list: At the end of each day create a list of things to do the next day. I use the webapp Todoist for this.
2. Create a habit of doing the tasks: Plan some time each day to do the tasks you set out to do the previous day. Use the pomodoro technique to keep you focused on the task and let your mind “breathe” in between.
3. Focus on making progress (it is a process), not on finishing (delivering a product): Don’t try to push on until you have the exact result you wanted, but rather focus on (and be content with) progress toward the result.

Small pieces of progress will eventually lead to a finished result. And by completing those “small progresses” you’ll be surprised how fast you get the result you wanted (that seemed very “hard” at first).
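As an aside, the pomodoro rhythm from point 2 can be sketched in a few lines of code. This is just my own illustration (the function name and default durations are my own choices, not from the course): given a start time, it lays out a block of focused sessions separated by short breaks.

```python
from datetime import datetime, timedelta

def pomodoro_schedule(start, sessions=4, work_min=25, break_min=5):
    """Return a list of (label, start, end) blocks: focused work
    sessions separated by short breaks, pomodoro-style."""
    blocks = []
    t = start
    for i in range(sessions):
        work_end = t + timedelta(minutes=work_min)
        blocks.append(("work", t, work_end))
        t = work_end
        if i < sessions - 1:  # no break needed after the last session
            break_end = t + timedelta(minutes=break_min)
            blocks.append(("break", t, break_end))
            t = break_end
    return blocks
```

For example, two sessions starting at 9:00 give you work 9:00-9:25, a break until 9:30, and a second work block until 9:55.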

Chunks - Information connected through Meaning

Chunks are bits of information that are connected through meaning. They can be small when you begin learning a new concept, but they tend to get bigger as you come to master it. Forming chunks is an important process during learning. You can improve chunk formation by improving your focus, by understanding what you are learning (“don’t just read the material”), and by recalling what you can remember just after reading/learning a new part of what you are trying to learn. It also helps to form the bigger picture and to learn to apply the newly formed chunks. Applying means not only knowing “how” but also knowing “when” to apply them. The "when" is best learned by practicing: interleaving relatively easy exercises with more difficult ones. Don't be afraid to make mistakes, because they let you learn something more deeply (each time you make a mistake, the probability of making it again gets smaller). Also, it is a great idea to revisit material you have already learned, so to speak, to deepen your understanding of it. This will let your brain chunk the material more effectively.

Recall - Hard but worth it

I briefly touched upon this subject above when writing about chunks. Trying (and really just that: trying!) to recall what you can remember after having studied new material (with all books closed) already helps to ingrain the new material in your brain by activating the neural pathways that just formed. When you do not recall (or try to), the probability is higher that those pathways will dissipate before they have been ingrained. It is important to realize that it is not a big deal when you cannot remember the nitty-gritty details of what you just studied. Just recalling the main ideas is already a worthy exercise and will pay off. Details of the concepts can follow later (in your next session). In doing so, you’ll gradually ingrain the chunks in your memory so that - when ready - they can themselves be combined into greater chunks, and your knowledge of the subject deepens.

So, with that out of the way, how do these concepts relate to each other? How can you optimally make use of these techniques to become a better learner? These are the key questions I’ll answer below, forming a logical whole, explaining how I use them and maybe convincing you to do the same.

Let’s start with tasks. Creating tasks, if used correctly, will let you tackle procrastination. Create tasks the moment you think of them (this means: have your task system always at your disposal; if you’re like me, you’ll think about doing stuff - or at least about trying not to forget to do it - all the time), so that you can enter them in your task list and no longer have to remember them. (This also has the added value that they won’t occupy a “slot” in your working memory.) Creating tasks efficiently and correctly is a science of its own. The book Getting Things Done by David Allen is a good starting point if you’re interested. As said, I am using Todoist for my task list needs. It works on every platform (Android, iOS, Mac, PC, browser ...).

Next: you’ve learned that recall is ideal for creating chunks. So why not create tasks for that (with a certain time and frequency attached), so that you can “check off” a task when you’ve done a recall session? That will feed your habit “zombies” and give you a feeling of accomplishment. Remember: the process is more important than the product. So even one (just one) recall session already sets you on the way to the product (and lets you make progress). Enter these tasks at regular intervals in your task system so you won’t forget them, and keep them in your task list (which you read and refresh at the end of each day). Checking off these tasks will also give you a sense of fulfillment as you check off more and more (“Hey, this week I completed 40 tasks!”) without actually having finished the actual product. Use the pomodoro technique so that you don’t start overlearning (25 min. of focused attention per session is enough).
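One way to put those regular intervals into practice - my own sketch, not something prescribed by the course - is to generate the due dates for the recall tasks with expanding gaps between sessions, and enter each one in your task system. The gap values here are illustrative, not an official spaced-repetition schedule:

```python
from datetime import date, timedelta

def recall_due_dates(studied_on, gaps_days=(1, 3, 7, 14, 30)):
    """Given the date you first studied some material, return the
    dates on which to schedule a recall session. Each gap is counted
    from the previous session, so the spacing keeps expanding."""
    due = []
    d = studied_on
    for gap in gaps_days:
        d = d + timedelta(days=gap)
        due.append(d)
    return due
```

Feeding each returned date into your task list (Todoist has recurring/dated tasks for exactly this) means you never have to remember when the next recall session is due.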

After a while, this will become a habit. And that’s where you want to be: create good, effective habits that will help you learn more effectively.

I’ve summed it all up and have tried to give a visual overview below of how these methods and concepts all work together (you can download the overview by clicking on this link).

Mind map of Learning Concepts and how they relate to each other

Anyway, I hope that by reading this assignment I’ve shown some of the added value of the course and also demonstrated my understanding of the course material. I’ve also provided some useful links to material you can use yourself to get up to speed in improving your learning.
I wish you a lot of success in becoming a better learner!

Thanks for reading and assessing my contribution. You can also leave a comment below if you wish to do so or if you have something to share on the topics discussed.

Thank you!

Saturday, May 11, 2013

Model Thinking, Part Two


I just finished the second part of the course Model Thinking, delivered by Scott E. Page of the University of Michigan. As said before: a good investment of your time. I just thought that I should put up the other part of the mind map. Just in case...

The second part deals with Lyapunov functions, coordination and culture, path dependence, networks, randomness (and random walks), Colonel Blotto games, the prisoner's dilemma, collective action, mechanism design, replicator dynamics and prediction models. It ends with a teaser on "the wisdom of crowds".

As you can see, quite a lot of content, so here it is.

Here is the pdf, if you wish to download it.

Friday, April 12, 2013

Model Thinking, University of Michigan

Hi there,

It's been quiet around here for some time, I know and acknowledge that. It just is not simple to keep up with a full-time job, family, my reading list and this blog. Enough with the excuses already.

Anyway, that's not what I wanted to write about; this is: I recently decided to enroll in another course via Coursera: Model Thinking. At the moment we are about halfway through (5 weeks into the course) and have just completed the midterm exam.

I encourage all of you to take the course. It is a super interesting course for anyone who wants to deepen their understanding of how the "world" works in general. Read the intro for the course if you want to know what exactly that means.

For a quick overview of the first 10 sections, you can check the mindmap I made:

 You can download a pdf version here

Wednesday, March 28, 2012

Software Engineering for SaaS, Berkeley

I recently participated in an initiative from the University of California, Berkeley that I thought was worth a blog post. As my regular readers know, I'm always interested in something new, so I decided to give this initiative a try. And oh boy, was it interesting!

By offering it through Coursera, Berkeley decided to open some university courses to students all around the world, because people of all ages who want to learn should be able to. As Coursera puts it:

Coursera is committed to making the best education in the world freely available to any person who seeks it. We envision people throughout the world, in both developed and developing countries, using our platform to get access to world-leading education that has so far been available only to a tiny few. We see them using this education to improve their lives, the lives of their families, and the communities they live in.

Sounds good, doesn't it? Anyway, I subscribed to a number of these courses which I thought were interesting. I recently completed one, and that's the one I want to write about: Software Engineering for Software as a Service. As an Infrastructure Guy I regularly struggle to understand those weird Developer Guys. They regularly use terms like "unit tests", "nightly builds", "developer frameworks" and so on. I thought I had a reasonably good idea of what they meant, but nope: I didn't.

After completing this course, I now have a much better idea of what it means to be a developer in these challenging times. Really. The architecture of a SaaS application is much clearer to me now: service-oriented architecture; HTML, CSS, XML, XPath and how they all come together in creating a SaaS app; how a SaaS app is meant to scale horizontally; how Ruby on Rails works and why it is such a good match for creating SaaS apps; what Behaviour Driven Development is; what Test Driven Development is. And that's just the tip of the iceberg, I could go on and on. And all of this was supported with videos of lectures, forums, homework and quizzes (exams if you will). Taught by Armando Fox and David Patterson (the man, the legend!), it's really quite an experience.

I must honestly admit it was more difficult than I thought to combine this with a full-time job (and a family for that matter), and I regularly had to deprive myself of some much-needed sleep, but I like to think it was worth it.

I can certainly recommend this course, and probably some others on the Coursera site, but as I have not yet completed those I'll stay neutral. Anyway, they are all offered for free, so the only thing they will cost you is the effort you need to put into them (and mind you, it will be a non-trivial effort).

So give your mind a treat, go and subscribe !

Wednesday, January 25, 2012

Nested RHEV 3.0 Beta with VMware Fusion

I wanted to set up a small RHEV lab at home to test some specific ideas I have. I didn't have dedicated hardware available for this, so I decided to use my licensed copy of VMware Fusion. I knew that nested virtualization is possible with Fusion because of the numerous blog posts describing how to do this with nested ESXi. I don't have a lot of time at the moment, so I'll keep this blog post short.

I expected to have no issues using the same settings for RHEV as for nested ESXi in Fusion, but Murphy thought differently. It didn't work. Also - to my surprise - I couldn't find any info on it via Google. Apparently nobody had tried this before (or cared enough to blog about it). So that's why I'll write a little blog entry myself about what I found. Because it is possible: nested RHEV inside VMware Fusion on - let's say - an iMac.

First the basics: I assume you know what RHEV is. It consists of a management component called RHEV-M (based on the open source oVirt project) and a hypervisor component called RHEV-H (based on KVM); you can read more about it here. The installation of the RHEV-M component is pretty straightforward. Just follow Red Hat's basic installation instructions on their RHEV 3.0 Beta site.

However, the installation of RHEV-H inside a VM running in VMware Fusion proved not so straightforward. Let's start by creating a VM in Fusion with 5 GB of disk and 2 GB of RAM. Do some basic configuration for this VM (turn off printer sharing, sound card emulation and Bluetooth/USB sharing, as you won't be needing those for a nested hypervisor). RHEV-H will boot inside the VM from the installation media (I assume you already know how to do this), but it will fail to install. First, a setting to allow nested virtualization needs to be set. Manually edit the .vmx file of the VM and add

vhv.enable = true

to the file to enable this. Chances are installation will still not work. You need to manually add another setting to the .vmx file to disable the xAPIC extension of the emulated APIC interface.

apic.xapic.enable = FALSE

Installation will work now, but a subsequent boot of the VM from its disk will not. For that, add yet another setting manually to the .vmx file of the VM:

scsi0.virtualDev = "lsisas1068"

to change the emulated disk controller to a SAS type. Try again. RHEV-H should now boot fine into its setup screen, and you can start configuring your RHEV-H hypervisor(s) and registering them against your RHEV-M installation. Cool.
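To summarize, the complete set of manual additions to the VM's .vmx file (at least in my setup) looks like this:

```
vhv.enable = true
apic.xapic.enable = FALSE
scsi0.virtualDev = "lsisas1068"
```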

Hope this was helpful. Any comments? Don't hesitate.

Monday, December 12, 2011

GlusterFS : The Storage of RedHat

I recently visited OSC2011 and attended a talk named "The Future of Storage: GlusterFS Technical Overview", presented by Patrick van der Bleek. As my regular readers know, I have been playing with and reading about GlusterFS for some time now, so it should come as no surprise that this presentation got me thinking about some of the gritty details surrounding GlusterFS. For those that are not familiar with GlusterFS, you can get more information here, here, and here. If you're interested and don't know GlusterFS, you'd better read those documents first, otherwise the rest of this post will be utter nonsense to you.

Anyway, several good questions were asked during the presentation, and I felt that they didn't get the attention they deserved, mostly because of timing concerns. Before getting into them, you should know that on October 4, 2011 Red Hat announced that it had signed a definitive agreement to acquire Gluster, thereby acknowledging that they too see a bright future for scale-out storage systems, one of which is GlusterFS. In the meantime Red Hat released a Virtual Storage Appliance, allowing virtualized environments to seamlessly deploy GlusterFS on their hypervisors.

So, on with the questions then ?

The first question raised was about high availability: knowing that Gluster's storage can be accessed in three ways (CIFS, NFS or the native GlusterFS filesystem), if and how, in the case of NFS or CIFS, the high availability of the mount is managed. During the presentation it was said that this is not managed by Gluster at all, meaning that if one peer of a replicated brick fails and a client has a mount from that particular peer, the mount point will become invalid, resulting in some impact. While this statement is true, several solutions exist for this particular problem, although they are not exactly part of the standard Gluster installation. For instance, this document briefly describes how to make your mount points highly available using CTDB.

Another question was whether Gluster supports file locking on replicated bricks. Note "replicated": if one client locks a file for editing on one peer, is it also locked on the other peer? (What if another client is accessing the same file on the other replicated brick?) The answer is the default in IT: it depends. If the client is using the native Gluster client, it will always work. If the client is using CIFS or NFS, it normally will not - unless you're using a clustered version of NFS or CIFS. For example, CTDB, referenced above, supports locking across the replicated peers for CIFS mounts. The point is that if the method you use to access the storage supports clustered locking, Gluster will honor it too, because Gluster inherently implements POSIX-compatible distributed file (flock) and record-level (fcntl) locking in the "feature/posix-locks" translator.
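As a quick illustration of the native client in action, here is roughly how a two-way replicated volume is created and then mounted. The hostnames, volume name and brick paths below are made up for illustration; check the Gluster documentation for the exact syntax of your version:

```
# On one of the peers: form the trusted pool and create a volume
# replicated across one brick on each peer.
gluster peer probe server2
gluster volume create myvol replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start myvol

# On a client: the native FUSE mount knows about all peers, which is
# why it keeps working (and keeps locks consistent) if one replica
# goes down - unlike a plain NFS or CIFS mount against a single peer.
mount -t glusterfs server1:/myvol /mnt/gluster
```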

Yet another: in a setup where all of the Gluster storage is accessed via NFS mounts, how can you take care of load balancing between the replicated bricks on the peers? You want all peers loaded more or less equally to achieve maximum efficiency. First, there is the manual option: you manually distribute the NFS mount points so that each peer receives more or less an equivalent piece of the load. A second option is to implement a round-robin DNS balancing mechanism. Yet another option is to implement CTDB and then put a load balancer with an intelligent balancing algorithm in front of the VIPs. Again, none of these are part of the standard Gluster installation, so they require additional setup.

And then, last but certainly not least: how does Gluster handle split-brain scenarios? That was the most intriguing question for me. The AFR (Automatic File Replication) translator handles the self-healing of replicated bricks when they get out of sync. You can find more info on how AFR handles different situations here and here. Basically, the AFR translator looks for the brick containing the inode with the highest number of pending metadata operations, considers that the authoritative source, and replicates it to the other bricks. It does this on a per-inode basis. In the edge case where the inode type for a certain inode differs between bricks, user intervention is required to fix the replication (basically, the user has to determine which source is considered authoritative for that inode, and does this by unlinking (deleting) the inode that is considered outdated). In most cases the self-healing feature of the AFR translator should do its job, but there are certainly scenarios where you could get into trouble.

I have some more questions (like "What's up with that geo-replication?" amongst others) on my list but I'll stop here. Hope it was of some use. Courteous Comments ? Pointers to more documents ? Don't Hesitate.

Sunday, November 27, 2011

A biography worth reading...

Sorry folks, nothing technical this time. I recently read Mr. Steve Jobs' biography by Mr. Walter Isaacson, and it (among other things) inspired me to write this little blog post. The "rules" below are in no particular order (so don't think rule #1 is the most important one) and they shouldn't be interpreted as axioms of sorts. I just think it's nice to think about them once in a while.

Rule #1. "A-players hire A-players. B-players hire C-players who themselves hire D-players. It's important to have A-players. If not, you'll soon have a bozo explosion in your organization, which will eventually ruin the company."

Rule #2. "A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be." Attributed to Wayne Gretzky.

Rule #3. "Simplicity is the ultimate Sophistication". Guess this one explains itself.

Rule #4. "Not. F**king. Blue. Enough!!!" Read the book to fully capture this one.

Rule #5. "It's our job to read things that aren't yet on the page" No market research. Just devote your time to whatever it is your heart is in.

Rule #6. "if you're not busy being born, you're busy dying" - Bob Dylan.

There's enough info in the book to go on for 50 more "rules" or so. Anyway, I think you've got the message: this book is worth a read.