How to pass empty lists as default argument in Python

I’ve written a gist briefly explaining why it is not a good idea to use an empty list as the default argument of a function in Python.
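As a quick illustration of the pitfall (this snippet is mine, not the gist’s code, and the function names are hypothetical): the default value is evaluated only once, when the function is defined, so every call shares the same list.

```python
def append_item(item, target=[]):  # the default list is created once, at definition time
    target.append(item)
    return target

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] (the same list is reused across calls)

# The idiomatic fix is to use None as a sentinel and create the list inside:
def append_item_safe(item, target=None):
    if target is None:
        target = []
    target.append(item)
    return target

print(append_item_safe(1))  # [1]
print(append_item_safe(2))  # [2] (a fresh list on every call)
```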

The gist can be found at:

Leave a comment

Filed under Uncategorized

Vim and productivity

It is often said that only brand-new tools drive productivity in software development, and in this regard many people tend to discourage using an editor like Vim. The truth is that Vim is an incredibly useful tool, and sometimes even more powerful than many “modern” editors (or IDEs).

I have nothing against IDEs (in fact, I wrote a post about PyDev for Eclipse in the past), so the point here is not to discourage their use, but to demystify many ideas that most people have about Vim.

There are many reasons why learning Vim can be beneficial. Perhaps the most important one is that, although it might be hard at first, once you have mastered it you become more productive, and you notice you are able to do things you could not do with other tools, or at least not as quickly. In addition, after learning some basic commands, the rest of the learning process becomes easier.

Here are some of the most significant reasons for learning Vim:

* Vim is generic: this has many interpretations. While most IDEs or editors are tied to one particular programming language, Vim is abstract and adaptable to any particular technology, which means that many of the current tools will probably no longer be relevant in the future, while Vim will remain valid, as it has been since it was built. It is also highly adaptable, extensible, and customizable for the user’s purposes and preferences.

* Ubiquity: Vim (or vi) will be present on every Unix-like server you might need to deal with. In this case, if you need to edit some file(s) on the server, this is your only choice (you could have Emacs, but again, the principle is the same; in fact, almost everything explained here applies to Emacs as well, the point being to pick one of these tools as your editor).

* Productivity: again, after learning how to do certain operations, you will notice that you can do things faster in Vim than in other tools.

* Customization: Vim is highly adaptable, so you can have your own configuration tailored to the operations you perform most frequently and to your preferences.

Learning and using Vim is worthwhile, and contrary to what many people may think when they first start with the editor, you don’t actually have to learn all the commands or operations at once, just those that are useful for your work. This means different users may have a different set of “favourite” settings or known commands.

If you are a professional already using Vim, it would be great if you could share your thoughts, opinions, or experience with this editor. If not, I encourage you to start with Vim, at least until you reach the point where you notice you are doing things better (which might not be soon; be patient, it could take a few hours or days).

Leave a comment

Filed under Uncategorized

Google Charts are handy tools

I have been experimenting for a while with Google Chart Tools [01], and they are a powerful tool for many situations in which we might want to present data in a nicer, more readable manner.

Presenting a chart has a much better impact than presenting the raw data. Adding graphics can therefore become a repetitive task, and of course we would like to perform it in an easier way. This is why having such a tool in the toolkit is a great thing; beyond the particular technology you use (Google Charts is not the only option for this), what is important, from my point of view, is to have something at hand so you can call it when you need it.

This particular set of tools is very powerful (for most of the reasons explained in the previous paragraph), but it is also very simple to use (it just requires a few examples and an understanding of JavaScript; after identifying the pattern, it becomes easier).

It displays data based on sources, which can be data tables or external data sources (databases, etc.). Regardless of the underlying data source, I would like to present the process of displaying the graphics, and perhaps leave data sources for a further post.

Getting started with a simple example
The steps for achieving this are actually simple. We need to add the Google Charts libraries; once they are linked, we create the structure the chart will have, fill in the data, and then define the HTML section (a div) where the graphic will be displayed (the part of the DOM identified by an id, which will be referenced within the JavaScript code).

google.load("visualization", "1", {packages: ["corechart"]});
google.setOnLoadCallback(drawChart);

function drawChart() {

    var data_table =
        [ ['col1', 'col2']
        , [  x11 ,  x12  ]   // x11, x12, ... are placeholder values
        , [  x21 ,  x22  ]
        , [  x31 ,  x32  ]
        ];

    var data = google.visualization.arrayToDataTable(data_table);

    var options = {
        title: 'Function Graphic',
        curveType: 'function'
    };

    var chart =
        new google.visualization.LineChart(document.getElementById('chart_div'));
    chart.draw(data, options);
}

Then, with this generic code skeleton (along with the other HTML elements that are required), we just create an element in the DOM with the id “chart_div” (in this case, as it is referenced in the chart function), and the graphic will be displayed.
One important thing is the way we define the data values in the variable: the function expects a list of lists where the first element is the header and the rest are the pairs of values to be displayed. For all the kinds of graphics this type of information applies to, we could simply change the chart type (a different one instead of LineChart, if it applies).

For example, by following a structure like this one (it’s not the only one; different kinds of graphics might require a different structure, but it is a common programming idiom for cases like this), we can create a representation of a mathematical function (x², for example).

The code would be like the following:

google.load("visualization", "1", {packages: ["corechart"]});
google.setOnLoadCallback(drawChart);

function drawChart() {
    var data_results = generate_values(f, -10, 10, 0.5);
    var axis = ['X', 'Y=f(x)'];
    data_results.unshift(axis); // add the header at the beginning of the table

    var data = google.visualization.arrayToDataTable(data_results);

    var options = {
        title: 'Function Graphic',
        curveType: 'function'
    };

    var chart =
        new google.visualization.LineChart(document.getElementById('chart_div'));
    chart.draw(data, options);
}

function f(x) {
    return Math.pow(x, 2);
}

function generate_values(fn, startValue, endValue, pace) {
    var results = [];
    for (var i = startValue; i <= endValue; i += pace) {
        results.push([i, fn(i)]);
    }
    return results;
}

The only difference here is that an auxiliary function is used to generate the values to represent, and it works for any kind of function passed as a parameter (courtesy of JavaScript). All in all, this function returns an array with the required structure, and the headers are added at the beginning of this array; therefore, the resulting object is a valid structure for what the library needs, and there is no issue with the rendering.

The options parameter is an object that holds the properties of the chart to draw. All chart types share some generic options, but each type also has its own particular ones. The details and explanation of each one are well documented on the API page.

Although there are many resources within this library, explaining all of them would demand too much time. It is worth saying, though, that probably one of its greatest strengths is a common structure, or idiom, that works as a starting point and shows how to work with the tool.


[01] –

1 Comment

Filed under Uncategorized

Node.js and server-side JavaScript

The importance that JavaScript has nowadays in the IT world for building web systems is really clear. One thing that is not always clear is that, although JavaScript is most of the time used for client-side execution, i.e. for adding logic at the client end (which has a lot of advantages, by the way), this is not the only possible use: it is also possible to execute JavaScript code on the server side.

This is the approach taken by node.js, which is a platform for building applications based on JavaScript code; so yes, it executes JavaScript on the server.

According to its web page [01]:

“Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. [...]”

Despite being a technology for building applications on a web architecture, it has many other possible uses. For example, by installing node.js I can run my JavaScript code locally in a terminal, which probably makes it much easier to run and debug. This is an interesting point; however, it is still limited to pure JavaScript code, because things like the DOM cannot yet be simulated or executed locally (though there are projects to make this possible in the near future).


Returning to the main idea of presenting node.js, probably the most important point is that it allows us to create new web applications with a different approach. What do I mean by this? Well, the entire architecture of node.js is very different from classic paradigms or languages like Java/Ruby/PHP, etc., because it is event-driven (but that is a completely different topic, out of the scope of this introduction).


An important note is that node.js is based on the V8 JavaScript engine, an open source project from Google [06].


A simple example

Installing node.js is quite simple. We first download the source code from the web page [01] and extract it. Then, inside the local folder with the node.js files, we proceed with the installation:



$ ./configure
$ make
$ sudo make install


Now we can start the node.js console in the terminal with:

$ node
>

(the “>” prompt is from the node.js console).


This way it is possible to test a JavaScript file from the terminal; for example, for a simple file like script.js, the following command will execute it:

$node script.js


But this is just for trying JavaScript locally, which is great, but it is not the only possibility: node.js goes beyond that, being a development platform itself. So let’s try to build a simple example.

This simple example does not intend to be anything really complex; on the contrary, it is just a small piece of code whose purpose is to illustrate a simple application running on the server side and written in JavaScript. This brief example runs a service that returns over HTTP the URL requested by the user while browsing, and logs that same request on the node.js console. The code is the following:


var http = require("http");
var url = require("url");

function onRequest(request, response) {

    var pathname = url.parse(request.url).pathname;

    console.log("Index started - " + pathname + " requested");

    response.writeHead(200, {"Content-Type": "text/plain"});

    response.write("Hello. Your request was " + pathname);

    response.end();
}

http.createServer(onRequest).listen(1234);

console.log("server started");


I called this file index.js and executed it with:

$node index.js


So in this case we can load the application through http://localhost:1234 and we will see the requested URL as a response; by requesting any URL like http://localhost:1234/hello, the result would be

Hello. Your request was /hello


This is something that could be useful for other purposes, such as redirects.


Running applications like this one will require a particular (but, in general, simple) configuration and environment setup, as well as particular services (some of them are indicated in the references; they are mainly cloud computing SaaS solutions).


All in all, the main idea was to present a relatively state-of-the-art project which shows a new face of JavaScript, used in a way that was not so common until now. There are lots of resources and information out there on this subject, but I believe it is important to have some initial knowledge to start from.



[01] –

[02] –

[03] –

[04] –

[05] –

[06] –

Leave a comment

Filed under Uncategorized

Highlights of Richard Stallman’s talk – Part II

As presented in the previous post, the last talk that Richard Stallman gave, which I attended, had many important points. Regarding this talk and the set of ideas presented, I would like to comment on the claim that teaching at schools or colleges must be done using open source software, because I consider this idea one of the most important of the talk.

I totally agree with the idea that the role of educational institutions must be to educate independent people, future professionals capable of working with their own criteria. By educating through a set of proprietary software, what is actually being produced is an education dependent on some particular vendor. The example given was the following: “let’s say a student at school is using and studying some particular software, but he/she is curious about how it performs some particular functionality, so he/she asks the teacher about the part in question: how was this made? what makes it work this way? But the teacher can only reply that it is not possible to know, as the code is closed and not available to everyone”. Every piece of software introduces a new concept or knowledge, and this cannot be closed off to users, he said.

This is something that may deserve a review from now on, and it is closely related to the use of technology in education, an area which is now starting to develop new paradigms.

Leave a comment

Filed under Uncategorized

Highlights of Richard Stallman’s last visit – Part I

Last Friday I attended a talk given by Richard Stallman about software freedom; but, most importantly, he gave a presentation focused on what freedom means in general with regard to technology. It was an excellent opportunity to see and listen to such an important figure on this topic, and the talk was perfectly presented.

The talk was two hours long, and after that he answered questions from the audience (which was large).

I would like to highlight here the main ideas which I believe reflect the most important parts of the talk, as well as the fundamental parts of his work and goals.

Here are the main ideas I would like to sum up:

  1. Free regardless of the price: when talking about free software or giving freedom to the user, the intention is not to say something about a price or a fee; there is proprietary software that is free of charge, and open software that has a fee. The accurate meaning is “free as in freedom”, so it is about not making the user dependent on some particular programmer/vendor.
  2. It is about giving the power back to the user: software which is not open dominates and controls the user’s activity, as the user is not aware of the complete behavior of that software component.
  3. The importance of free software in educational environments: educational institutions share a set of core values with society, so it would not be correct to teach with technology which is not free, because that would create a dependency for those students after they graduate. Education and knowledge are about liberty, so it would be contradictory to teach something that in the end relies on the plans of a corporation. In addition, government institutions must use open software.
  4. Information about the public is being gathered without notice: this is something which has an impact not only on software but also on society in general, as many governments are trying to identify citizens through new databases without properly communicating the reasons, and without informing people about the security levels of those systems.

More topics were covered, with more information that I don’t want to disregard. In addition, it is well known that Richard Stallman’s ideas go beyond this talk, as he has been working on this initiative for many years.
Therefore, I will continue with these highlights in a further post.

Leave a comment

Filed under Uncategorized

A conceptual view about multi-paradigm programming languages

Sometimes there is a certain concern about programming languages that allow the programmer to build a solution not in one single paradigm, but with features of several of them. Maybe this started a while ago with some popular technologies; then we had the case of C++, in which, as an extension of C, we can develop either in a structured manner, an object-oriented one, or both.

Nowadays there are more technologies, and the example I would like to mention is Python. According to many definitions or references, Python is a high-level, multi-purpose programming language which supports many paradigms: object-oriented (probably as the main one), imperative, and functional, for example. This is the kind of analysis I would like to make: whether this is good or bad.

It is often said that this is not a great option because it allows the programmer to “break” a paradigm, or not to have a “pure solution”. For example: if someone is building a piece of software and decides to make an object-oriented design, nothing will prevent the programmer from starting to use some imperative idioms within the source code, and this is often taken as a drawback. I think the problem is not the technology itself (in this example Python, but the same principle applies to any other programming language like this one), but the programmer, designer, or architect who decides to do this. Whether or not to respect the paradigm in a pure, closed sense is the programmer’s responsibility, so it is not correct to say that some technology is bad just because it gives more power to programmers. In addition, this can sometimes be a good option, if we really need cross-paradigm functionality.

The detractors of this kind of technology allege that it is not good: it should be one paradigm or another, which is an interesting (and valid) point of view. However, despite the fact that object-oriented design/programming is a great option when building a system, it is not the only one. Even more: it is not always the best option. It is not true that, for all kinds of systems, building with an object-oriented model will always be better than following a different approach, because different problem domains have different structures.

I really like object-oriented design/programming, but I am also aware that it is not the only possibility, and that sometimes there are better options for the problem we need to solve.
Alan Kay said “a change in perspective is worth 80 IQ points”. I agree with that. So this could also mean that the object-oriented approach is not the only perspective, and if a programming language allows me to implement a solution by taking concepts from several paradigms, I think that is an advantage compared with programming languages which are only object-oriented (and while this is good for a pure object model, I repeat, it is also a limitation, because one day I might need another kind of solution).

The idiom “everything is an object” is valid, but only within the object-oriented paradigm, so I think it is better to have a wider landscape.

Now the discussion should be how to make use of this technology, because this has two different interpretations:

  1. Either we use this multi-paradigm capability to develop a solution that combines concepts from these paradigms (for example, a solution with object-oriented and functional traits), or
  2. We interpret this multi-paradigm trait as useful across different projects (each of them following just one single paradigm, without escaping from it), while maintaining the same technology along the project portfolio.

Regarding item 1, I think it is also a great feature, but it might be better to take this approach as a last resort, because keeping the model within one paradigm makes for a more coherent solution, probably easier to maintain in the long term.
It is difficult to decide when to follow this approach or not, because this is precisely what is under question, the part left to the architect/programmer. Remember, it is not always a bad solution; sometimes it is the correct answer.
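As a minimal sketch of interpretation 1 (my own hypothetical example, not taken from any particular project), an object-oriented model can be combined with functional traits in Python:

```python
class Employee:
    """Plain object-oriented part of the model."""
    def __init__(self, name, salary):
        self.name = name
        self.salary = salary

employees = [Employee("Ada", 3000), Employee("Alan", 2500), Employee("Grace", 4000)]

# Functional/declarative traits over the same objects: no explicit loops or mutation.
total_payroll = sum(e.salary for e in employees)
well_paid = sorted(e.name for e in employees if e.salary > 2800)

print(total_payroll)  # 9500
print(well_paid)      # ['Ada', 'Grace']
```

Nothing forces this mix; keeping it contained (here, only for queries over the model) is one way to get the benefit without “breaking” the overall design.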

Leave a comment

Filed under Uncategorized