Category Archives: Architecture

Attaching a new device to a virtual machine in KVM

One of KVM's features is the possibility of modifying parameters or configurations of a virtual machine that is already created and configured (as shown earlier in this blog), which is something we need in order to handle new requirements.

It is important to note that this feature should work the same regardless of whether the guest is a Windows-based virtual machine or a Linux one; however, if the guest is a Windows virtual machine, the latest libvirt libraries are required, otherwise the virtual machine will be restarted.

Let’s first gather some information about the current disk:

$ qemu-img info vm-disks/debian.vmdk

image: vm-disks/debian.vmdk
file format: vmdk
virtual size: 1.0G (1073741824 bytes)
disk size: 933M

1. Creating a new virtual disk

Let's follow the syntax indicated in the manual (man qemu-img). The creation of a new disk follows this schema:

create [-f fmt] [-o options] filename [size]

Which in this case will be:
rmariano@Ubuntu:~>$qemu-img create -f qcow2 disk2.qcow2 100M

Formatting ‘disk2.qcow2’, fmt=qcow2 size=104857600 encryption=off cluster_size=0

Looking at the information about this new disk:
rmariano@Ubuntu:~>$qemu-img info disk2.qcow2

image: disk2.qcow2
file format: qcow2
virtual size: 100M (104857600 bytes)
disk size: 136K
cluster_size: 65536

2. Checking the newly created disk
This virtual disk format supports consistency checks, which we can run with the following command:

rmariano@Ubuntu:~>$qemu-img check -f qcow2 disk2.qcow2

No errors were found on the image.

OK, the new disk was created and it seems to be fine; let's see how to attach it to the virtual machine.
3. Attaching the new device
Looking at the man page for the command to attach the disk, we can see that its structure is:

SYNOPSIS
attach-disk <domain> <source> <target> [--driver] [--subdriver] [--type] [--mode] [--persistent] [--sourcetype]

rmariano@Ubuntu:~>$virsh attach-disk debian-vm disk2.qcow2 /dev/sda2

Disk attached successfully
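
To double-check the result (a quick sanity check; the domblklist command is available in reasonably recent versions of virsh), we can list the block devices of the domain and confirm that the new disk appears among them:

rmariano@Ubuntu:~>$virsh domblklist debian-vm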

It is important to know how to handle these kinds of operations on the virtual machines in the environment, so that new requirements can be addressed rapidly. One topic worth mentioning is the option to do this dynamically, while the machine is running, but that covers a relatively different scenario with some additional considerations. So far, the main requirement was met.


A vision of Cloud Computing

I have been working on and researching cloud computing for a while now, and beyond what we already know about it, it is interesting to analyze what role it plays in the IT world, especially when it becomes an important part of the infrastructure, and therefore of the IT architecture.

We can find many different definitions of cloud computing, and all of them will be accurate, but the important thing is to understand the main concept behind this technical approach and how we can take advantage of it. That last point is as important as recognizing when we should not use this approach, because it might not be a good solution. That said, it has to be clear that a good understanding of cloud computing will let us know when it is a suitable solution and when it is not, and that is the kind of analysis I would like to make here.

When making a cloud computing decision, one of the main questions we need to ask is in which of the three well-known categories our solution will be running: *aaS (PaaS, SaaS, IaaS [01]). If the solution we are planning for the cloud does not fit in any of these categories, that is a clue that cloud computing may not be the best option for our goal. The fact that cloud computing is actually great in many aspects does not mean that it is the best option for all cases.

Cloud computing means the abstraction of some computational resources, making them available from everywhere. The main advantage of this is, of course, ubiquity. In addition, as the owner of those services, I also have the advantage of not being responsible for running them: if I put my solution on the cloud, I delegate the administration of that infrastructure, so I do not need to own and manage those servers, and I have the option of paying only for the service that is used. This delegation of the service is what impacts the decision of whether or not to use cloud computing the most, because it implies a potential security concern [02].

Another interesting feature of cloud computing is that it handles scalability automatically: if my application requires more resources, they are provided automatically. Otherwise, without cloud computing, I am responsible for the scalability of my application myself (with the economic implications that has), and it is important to know that, many times, these kinds of problems are show-stoppers.

There are many other aspects that a cloud computing implementation manages very well, like availability, performance, reliability, costs, etc., but besides all that, I think one of the most important aspects of cloud computing is the optimization of resources it achieves. So I would say that this is a very good option for most systems, except when it implies an important security concern (even in that case it could be a suitable solution, but it might require a more detailed analysis).

References

[01] – Platform as a Service, Software as a Service, Infrastructure as a Service
[02] – Cloud computing security


A new SQL?

There is a new project called “newSQL” [01], and it seems its purpose is to redefine the database access language for information management in a relational database. According to the project site, the main reason for this project is based on the “drawbacks” of the current SQL syntax and implementation, and the page enumerates many of those reasons.

Alternatively, there is a sort of draft with the proposed grammar for the “newSQL”, which completely changes the nature of the SQL language as we know it today. I would like to mention that some of the sample code seems to lean toward an object-oriented style, but it is not intended to be an object-oriented database (I will follow up on this point in the next paragraphs).

About the questioned items
There are a number of reasons mentioned about why the current SQL standard is “wrong”, or maybe not so accurate. Of course SQL is not perfect from my point of view (it is, however, an excellent language), but that does not mean that those changes are required; for example, what is the problem with SQL having a case-insensitive syntax? Is that something that really implies a technological limitation?

Most of these “reasons” are related to the current syntax of SQL, and not to real technical issues. But after all, the syntax is also important, because it has to do with declarative programming. Here SQL is a good (probably one of the greatest) example of declarative programming, because with this Structured Query Language the instructions are written in terms of what result I want to obtain or change, not how to compute it. Here is the part I mentioned about the proposed new syntax: if the data is structured in a relational manner and queried by SQL following that structure, why should we use a different syntax instead, like an object-oriented one, for something which is not engineered that way?
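
As a tiny illustration of that declarative style (the table and column names here are hypothetical, used only for the example), the following query states which rows we want, not how the engine should scan for them:

$ sqlite3 example.db "SELECT name FROM employees WHERE salary > 50000;"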

Justifying the change
Besides the syntactical reasons behind this project, it is true that different database engines might have some differences in their rules, but it is also true that for many projects (especially projects built with high-level development frameworks) this is no longer a limitation.

For example, consider the case of a project with a relational database that uses a technology like Ruby (presented earlier on this blog [02]) or some other framework with an ORM (Object-Relational Mapping [03]). It is true that there the syntax is object-oriented, but for another reason: the framework abstracts the database so the data can be managed in the domain model layer.

I think the development of technologies like ORMs is something that tackled these “issues” early on. Therefore most of these “reasons for changing SQL” were already addressed.

It is good, however, that there is a perspective that the current SQL is good but not perfect, and it is also a good thing that a project is trying to improve the implementation. On the other hand, such a big change is not easily achievable, and replacing all of SQL could be complicated at this particular point in time, when it has become a standard.

In addition, not all databases are about SQL, as there are also NoSQL options [04].

I would like to mention that this is a good example of free project development: even if this new SQL does not become a new standard, nothing prevents using it in particular projects or other implementations.

References

[01] – http://newsql.sourceforge.net/
[02] – https://itarch.wordpress.com/2012/03/12/a-note-on-ruby-on-rails/
[03] – http://en.wikipedia.org/wiki/Object-relational_mapping
[04] – http://nosql-database.org/


A note on Ruby on Rails

As we may know, there are many technologies available for the creation of a new system when the project is just starting, including programming languages, frameworks, toolkits, IDEs, etc. The same scenario applies when the project is a web system. And one of the roles of an IT architect (from my point of view) is to select the option that best fits the requirements, respects the constraints (the domain constraints and also the requirements from the stakeholders), and achieves the goal of the project in the scheduled time. That often implies having knowledge about many technologies (probably regardless of the level of expertise on each one, which might differ).

Therefore, we might be interested in learning about many web-based technologies if we are going to create a new web system from scratch, and one of these options could be Ruby on Rails. Please be aware that this article is not an extensive analysis of Ruby on Rails, but a high-level review, because the idea is to present the concepts of this technology.

But first of all, what is this?
Well, on one hand we have Ruby [01], a high-level programming language with a lot of functionality (it is a bit similar to Python, I have to say). On the other hand, Rails is the framework used with Ruby in order to create web systems. The analogy that could be made is that using Ruby on Rails is similar (or equivalent) to using Python with the Django framework.
So we first need to know something about Ruby (a good starting point can be found at [02]), and then move on to the framework, in order to proceed logically.

Some interesting traits
At the beginning of the project, we need to create a directory where the files will be stored, which will be the root of the project. Once there, the framework works through commands that perform the basic operations, such as creating objects, etc. This part reminded me a bit of Symfony, a PHP-based web development framework, because of the high level of the framework's functionality.
The configuration and synchronization with the database is also performed by the framework with a command, which is another good feature. A good point to mention here is that Ruby on Rails makes use of the active record pattern [03],[04], as it uses an ORM (Object-Relational Mapping). Therefore we just need to care about our objects in the domain model (DM).
That said, we can see that a command like the following
rails generate model User username:string first_name:string last_name:string
generates a new model for a User (the initial uppercase is a good naming-convention practice for classes). After that we can perform many other actions, like seeing the files it generates, checking the model, etc.

Ruby has another interesting feature, which is rake, a Ruby-style tool similar to the Linux 'make'. This command allows us to perform many operations, such as migrating the database, for example.
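
For example (a minimal sketch, assuming a default Rails project with pending migrations, such as the one created for the User model above), applying the migrations to the database is as simple as:

rmariano@Ubuntu:~>$rake db:migrate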

The number of options Ruby on Rails offers is very extensive (not to mention the gems), so the most important things to know about this technology are how it works, which architectural patterns it uses, and what other technology is involved (like the ORM), because all of these are characteristics that will have a major impact when deciding whether or not to use this technology for a project, and behind that decision there must be solid reasons.

The idea was to present an overall discussion of this technology and how it relates to the rest of the technologies and concepts of software engineering. I found it very interesting and powerful, and at the very least a good option for creating web systems relatively quickly.

References
[01] http://www.ruby-lang.org/es/
[02] http://tryruby.org/levels/1/challenges/0
[03] http://api.rubyonrails.org/classes/ActiveRecord/Base.html
[04] http://en.wikipedia.org/wiki/Active_record_pattern


An Open source BI tool

Some days ago, I attended a presentation about Pentaho, a BI (Business Intelligence) platform for creating business intelligence solutions that manage information based on the business requirements.

The presentation showed the integration tool (Kettle) and the report management one. I'd like to share my comments regarding the first one.

The tool itself provides a lot of options and tasks, as well as great support for many database engines. One of the best features is the handling of source files: it supports a lot of input files, and their configuration is made very easy (it automatically handles the lines, the end of the file, etc., without having to configure these kinds of parameters manually, which may lead to issues in the control flow of the program). Another great feature is that it supports and handles data changes automatically, which means that if there is some difference between the source and the target, it gets applied on the next execution without having to check these differences by hand.

As the number of tasks required for the same operation decreases, the performance of the entire process may increase, and it is always better to have some kinds of operations supported by default, because that usually means they are already optimized. It is also true that the overall performance of the solution relies on the rest of the technology involved (like the databases used), but having that support built in is a great option, as are many others that this tool offers.

Based on the information regarding this tool, it would be useful to consider its advantages and disadvantages in order to make a good decision about whether it applies to a particular BI project.


Python development in Eclipse

Vim is a great text editor, with a lot of useful functionality and very powerful; however, sometimes we might want to use other programming tools, such as an IDE (Integrated Development Environment), and in this field one of the most popular (probably the most popular) is the well-known Eclipse editor. Luckily, there is a very popular tool (which I recommend) for developing in Python using Eclipse: PyDev (http://pydev.org/).

PyDev is installed as a plug-in for Eclipse, so we need to run Eclipse as root or an administrator, because the installation requires such privileges. For example, Eclipse could be started through the following command:

$sudo /usr/local/eclipse/eclipse
Where /usr/local/eclipse/ is the absolute path where Eclipse is installed.
After starting the program, we can go to Help | Eclipse Marketplace. Once there, we can type the name of the plugin (PyDev) to start the installation. The installation follows the normal process, which involves selecting the package, accepting the licence, and waiting until it downloads and installs.

When the installation is complete, we still need to perform another step to link the Python build to the environment. This time it is necessary to go to Window | Preferences | PyDev | Interpreter – Python | New and, once there, indicate the path of the Python binary, which should be something similar to:

/usr/bin/python2.7

This can be easily checked with the command:
$whereis python
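
As an extra sanity check (assuming the interpreter really is at /usr/bin/python2.7, as configured above), we can run it directly from the terminal:

$/usr/bin/python2.7 --version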

After that we can start Eclipse again and use it normally. If everything is OK, you should be able to create a new Python project via File | New | Project | PyDev | PyDev Project.

As we know, Eclipse is a great option for developing software, and it is useful to keep another tool in mind: despite the fact that Vim is an excellent editor, it does not hurt to have one more tool in the development toolkit.


Creating a new VM on KVM

The KVM (Kernel-based Virtual Machine) virtualization technology has several options regarding the configuration and management of virtual machines. In order to create a new virtual machine, we can use the VMM (Virtual Machine Manager), which is a tool with a great GUI, very similar to the tools of the rest of the virtualization technologies (such as VMware Workstation, VirtualBox, etc.), but it is also possible to create a new virtual machine without this GUI, through the command line.

If we use the GUI tool, the configuration is very similar to the rest of the applications: what has to be indicated are the disks, memory, network, etc. Another trait is that an existing virtual machine can be modified. Nevertheless, the command-line alternative can be useful for deploying new virtual machines faster.

This command-line based installation is going to be accomplished through the virt-install tool, and we can check the manual page (man virt-install) to learn more about this tool and its parameters.

In this case, the example is about the creation of a new virtual machine from scratch (a new Arch Linux OS), installing it from its .iso image.

One of the steps in the creation of the new virtual machine is the installation, which will display a new screen for the new virtual machine, and this requires a program called virt-viewer, so we first need to install it before performing the installation:

sudo apt-get install virt-viewer

After installing this tool, we can continue with the installation, indicating the main parameters of the virtual machine. According to the man page, we can indicate parameters such as the name of the new virtual machine, the amount of RAM memory, the path of the virtual disk file (and its type, keeping in mind the types supported by KVM), the network configuration (if we are going to use networking functions on this machine), and the CDROM (if any). Other parameters can also be indicated for a more detailed configuration, such as the boot order (--boot), the CPU architecture or configuration, the screen, etc. Notice that the virt-install tool allows a lot of possible configurations.

In this case I will create this new virtual machine from the ISO image that I’ve downloaded, and I’ll proceed with the installation, so the command I ran was:

virt-install --name arch-vm --ram 64 --disk path=/home/rmariano/vm-disks/arch.vmdk,size=1 --network bridge=virbr0 --vnc --os-variant generic26 --cdrom software/archlinux-2011.08.19-core-x86_64.iso --boot hd,cdrom --cpu host

Where --name arch-vm means that the new machine is called ‘arch-vm’, and --ram 64 is the amount of assigned memory (64Mb). The following parameter, --disk path=/home/rmariano/vm-disks/arch.vmdk,size=1, indicates to create a new virtual disk in vmdk format (the VMware format, supported by KVM) with 1Gb of disk space. Then the networking configuration is indicated by --network bridge=virbr0, which means to use the bridge called ‘virbr0’ (using a bridge requires root privileges, so this command must be executed with the ‘sudo’ prefix due to this parameter); there are also other possible options, like network=default (there has to be a default network configuration for the virtualization). The --vnc parameter indicates the graphics, and --os-variant generic26 means that the OS is a Linux with kernel 2.6.

After that, one of the most important parts is the CDROM parameter: --cdrom software/archlinux-2011.08.19-core-x86_64.iso links the CDROM to this ISO file and mounts the image for the installation of the OS (the idea is that the machine will boot, then it will read the CDROM, and the installation should start). For this purpose, however, we need to ensure the boot order, and this is indicated by --boot hd,cdrom, which means to boot first from the hard disk (the first time it will be empty, so it will continue with the CDROM), and then from the CDROM. The last parameter indicates the type of CPU used for the VM. In this case ‘host’ means to use the host CPU (the one of the hypervisor), and this is a great advantage in terms of performance (the instruction set will be more accurate), but it also makes the machine less multi-platform (the CPU of the hypervisor may differ from one implementation to another, and this has to be considered if we want to migrate the virtual machine in the future; the migration of virtual machines is another interesting topic).

Once this command executes without issues, the virt-viewer window with the new virtual machine should be displayed, and the OS installation can proceed.
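
If the window does not appear automatically, it can also be opened by hand with the virt-viewer tool we installed earlier (possibly with sudo, depending on the libvirt connection used), passing the name of the virtual machine:

virt-viewer arch-vm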

Regardless of the installation or configuration method, the virtual machine is created in the same way, and as mentioned in the first post about KVM, the configuration of the virtual machine is stored in a .xml configuration file, which can be simply checked by using the ‘edit’ tool of virsh with the name of the virtual machine, in this case:

virsh edit arch-vm

This will show some of the main parameters and configurations.
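
As a rough sketch (the exact contents depend on the libvirt version and the options used; the values here simply mirror the parameters passed to virt-install), the file looks something like this:

<domain type='kvm'>
  <name>arch-vm</name>
  <!-- memory is expressed in KiB: 65536 KiB = the 64Mb assigned with the ram parameter -->
  <memory>65536</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
    <boot dev='cdrom'/>
  </os>
  <!-- the disk, network and other devices follow -->
</domain>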


Here we can see the name of the virtual machine, the number of CPUs, etc. The rest of the configurations (disk, network, etc.) are also in this file, located by default in /etc/libvirt/qemu/arch-vm.xml, and a change in this file will modify the virtual machine.

All in all, this is another option for the creation of new virtual machines, which shows how open this technology is in terms of functionality, configuration, and management.


Data modeling

A few days ago I gave a presentation called “Introduction to data modeling”. It was a great opportunity to highlight the main aspects of a data model and why it is important in every IT project. Based on the experience I have in my project, I highlighted that one of the most important parts of every IT project lies in the data and how it is managed; therefore, having a data model is extremely important.

Several tools can be used to achieve the goal of maintaining a good data model and documenting the database, and those tools might help us to create the data model design, update it, and finally create the database. Nevertheless, what I would like to share are the main operations that ensure a good data model, which in my opinion can be summarized as:

– Check the model: verifying the model is a very important task in order to avoid potential issues or errors beforehand.

– Validate the model: to validate is not the same as to verify. To verify might mean to review the model (as in the previous step), which means ensuring that the model is correct (in technical or logical terms), but to validate means to ensure that the model responds to the business rules accordingly (the model could be technically valid, or well constructed, but that does not necessarily mean that the business rules were modeled properly and that the requirements were well defined).

– Allow reverse engineering: databases are often changed or modified, and these updates must be reflected in the data model, which also has to be kept up to date. Reverse engineering means that the model can be retrieved from the current database state.

– Have a good data dictionary: this item (along with the previous one) is related to the documentation of the data model, something that is very important within the data architecture. Keeping the data documentation up to date is critical, as is applying the naming conventions. The data dictionary is useful every time we need to interact with the data (for any DML – Data Manipulation Language – operation), while the naming convention helps to keep the model consistent.

There might be other items to mention, but I wanted to share here those that have the most relevance in my opinion. Data modeling is something that has to be very well understood by every member of the team, because one way or another, all parts of the project are related to its data (that is what information systems are for, after all), and regardless of whether we are creating a new data model or working on an existing one, the rules of the relational data model are the same, and should be understood in the same way.


User Mode Linux Virtualization

It's UML, but it does not stand for Unified Modeling Language; instead, it is User Mode Linux. What is User-Mode Linux? Well, it is another virtualization technology, but probably a different one. So, what is the idea behind UML? In this case, virtualization is implemented by running a Linux kernel over another one (the latter being the host's Linux kernel), and therefore we have an instance of a Linux kernel running as a process of the host operating system. This kind of virtualization can only be implemented on a Linux system, and the main idea is to run Linux over Linux, making good use of the resources and letting the virtual kernel communicate easily with the real kernel, in order to perform faster and achieve great throughput.

How is this achieved? In this virtual environment, the Linux kernel will run as a file, like everything in a Linux environment, so in a simple way we can say that this file (compiled and packaged) is able to run on the operating system. But this is not enough, because it will also require some other components to work with, such as a disk, etc. As a matter of fact, these other components are also files (the Linux operating system also treats these components as files, so it is a similar situation). So basically, in order to run a virtual machine, we will need the compiled kernel (in a file) and another file for the disk (if we just want a simple system, like in this example). After collecting these files, all we have to do is run the kernel file (execute it from the bash terminal) and then indicate the main system characteristics as parameters (such as the disk file, the amount of memory, etc.).

Let’s see this example implemented:
1. Getting the files
First, it is necessary to get the kernel source code, which can be downloaded from http://www.kernel.org/
After downloading the kernel sources, it is necessary to extract and compile them.

rmariano@Ubuntu: tar -xvjf linux-3.0.1.tar.bz2
rmariano@Ubuntu: cd linux-3.0.1/

The entire process of creating a virtual machine like this one requires getting the code, compiling it for the architecture (in this case ARCH=um, for user mode) through the .config file, and applying the patch; that produces a new file (with the name of the machine) that starts the machine when it runs. The kernel can be compiled and configured from scratch, but that requires several parameters, so the first time it is a good idea to use the existing .config file located in /usr/src/linux-headers-`uname -r`/, which is the one for the current installation.
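
As a rough sketch of that build (a minimal example, assuming the sources were already extracted as shown above and the usual kernel build tools are installed), the compilation for the um architecture would look like this, producing an executable called linux in the root of the source tree:

rmariano@Ubuntu: make defconfig ARCH=um
rmariano@Ubuntu: make ARCH=um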
In this case, however, we are going to use an already existing kernel file for the virtual machine, and link it to a separate disk file in order to run it.
Besides the kernel itself, a file system is required, and for this we can get one from http://fs.devloop.org.uk/

In this case I have downloaded the Debian file system, but another one can be used. This disk file can also be created manually from scratch, but that would require further analysis.

So, in order to run the virtual machine, it is enough to execute:
./linux-2.6.24-rc7 ubda=Debian-5.0-x86-root_fs mem=32M
Where linux-2.6.24-rc7 is the compiled kernel executable, ubda=Debian-5.0-x86-root_fs indicates the disk file (the downloaded and uncompressed file) with the file system and the OS installed, and mem=32M means that in this case I have assigned 32M of RAM.

Once this command is executed the virtual machine should start running.
2. Analysis
Now this virtual machine is running inside the bash command line we are using, as a regular process, so we can quickly check it:
rmariano@Ubuntu:~>$ ps -fea
And you will notice that there is a line with the name of the process, like the following:

./linux-2.6.24-rc7 ubda=Debian-5.0-x86-root_fs mem=32M

In addition, if we check where this machine is running (for example with the pstree command), we will see that this process hangs under the bash terminal.

In this case I am running a Linux kernel version 2.6.24, while the version on the host system is 3.0.0-12.

Finally, let's see what kind of file the disk is:
rmariano@Ubuntu:~>$file Debian-5.0-x86-root_fs

Debian-5.0-x86-root_fs: Linux rev 1.0 ext3 filesystem data, UUID=a6c6a63b-a20b-4e87-8e23-50b80febcb5c, volume name “ROOT”

So it is actually a file typed as a file system, and it carries its own information as metadata.

3. Conclusion
What can this be used for? This is another virtualization mechanism (a paravirtualization method of OS virtualization) and, like every one of them, it has all of the advantages of virtualization, with the exception that it is only available in Linux environments. But there is another interesting point: since it requires compiling the kernel, it can be used for testing different kernel versions after some changes without jeopardizing the real machine (the host), and it is also easier and faster to run and test.

Introduction to KVM Virtualization

KVM (Kernel-based Virtual Machine) is a very interesting open source virtualization technology.

Virtualization is the concept of creating a virtual component (in most cases a machine, called a virtual machine) based on an abstraction, and using that component for specific purposes, taking advantage of what virtualization allows. Virtualization started with the concept of TSP (Time Sharing Processing), and later with the CTSS (Compatible Time Sharing System) developed at MIT. The idea of taking a system (a machine) and creating it as a process of another physical machine turned out to be a great one, but in this case what I think is an even greater idea is the kernel part: the Kernel Virtual Machine.
This means that the virtual machine runs as a process of the host OS (that is not something new); what is new is that the software that supports this virtualization, the hypervisor, is a module in the host kernel, which means it is supported directly by the Linux kernel.

Having the virtualization supported by the kernel is a big advantage, because it allows the virtual machines to perform faster and the hypervisor (the VMM, Virtual Machine Monitor) to handle the resources better, making use of many great functionalities such as kernel same-page merging, a better ballooning system, etc.

Let’s see how to start with KVM:
1. Checking the required hardware for KVM virtualization
We need to ensure that the CPU supports virtualization, which means it has the right instructions for supporting hardware virtualization (VT-x in the case of an Intel CPU, or AMD-V in an AMD one). This can be checked with the following bash command:

egrep '(vmx|svm)' --color=always /proc/cpuinfo

We should be able to see the flags highlighted in red.
Then we can check if the processor supports long mode (lm):
cat /proc/cpuinfo | grep -c lm
If we get a value greater than 0, then it is supported, and we can continue with the installation.

2. Installing KVM
It is necessary to install some packages through the following command (the example shows the command for a bash terminal in Ubuntu, but the same can be done on another Linux system):

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

This will install the required tools to start the virtualization.
After the installation, the programs require the user to belong to the proper group, plus a set of configurations, but these are done automatically in Ubuntu, so all you need to do in this case is to sign off and then sign in again, so the changes get applied.
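
As a quick check (assuming Ubuntu's default group name for libvirt, libvirtd), we can confirm the membership after signing in again:

rmariano@Ubuntu:~>$groups | grep libvirtd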

3. Checking the virtual machines
You can check the status of the virtual machines with the following command from the terminal:
virsh list --all
This will display all the virtual machines and the status of each one, including those that are off at that moment. So in my case I got something like this:

rmariano@Ubuntu:~>$virsh list --all
Id Name Status
----------------------------------
- debian-vm shut off

It is important to mention that the virsh application is the one that handles the virtual machines, and through this application we can manage each one. As this is an application, you can enter virsh directly and handle the commands from there; in that case the prompt will change, just like this:

rmariano@Ubuntu:~>$virsh
virsh #

You can type ‘help’ to get the list of all the commands, and if you need help on a particular command you can use: help <command>, as in this case:
virsh # help list

The command has its own man page, which you can open from the bash terminal to get the complete information; just type: man virsh

4. QEMU
We have also installed QEMU, but what is it? QEMU is an emulator that allows us to create the whole hardware platform that we are going to use for the virtual machine, such as the disks, the images, and the VM itself.
QEMU presents an abstraction of the hardware to the virtual machine, and the configuration is saved in .xml files (managed through the libvirt libraries).
There are many ways to create a virtual machine: it is possible to create the hardware and then install the VM using the virt-install tool, or to create it with the Virtual Machine Manager, an application with a GUI for managing the virtual machines.

I would like to review the command-line tool further in a later post, as in this first opportunity it is better to start with the other approach in order not to extend the explanation.
So, the next step will be to install the Virtual Machine Manager, an application that uses the libvirt libraries and is maintained by Red Hat.

After installing this application you will see how easy it is to create and deploy a new virtual machine from scratch; it is very similar to the rest of the virtualization applications.

All in all, what I would like to highlight most are KVM's great features in terms of its very good integration with the host OS, and its great management features and options. In addition, it is extremely configurable, which is very powerful. As a disadvantage, I can say that many processes and configurations have to be performed manually, which is often more difficult, but that is also a great opportunity to learn what really happens behind the VM.
