Tag Archives: network

Flying patterns time-lapse

You can give thanks to Dennis Hlynsky for these beautiful images.

The next video is just genius. Basically, he wants to find out whether fruit flies crawl around randomly or whether they actually cover the whole surface of the fruit. This is the result.

He tries similar experiments with ants… but the result is not that impressive. Basically, the area is very small and there is nothing interesting there for the ants, so we don't see them start forming patterns (like when they go for food).


maths with Python 6: Twitter API – Tweepy for social media and networks (with Gephi)

We are back again with another Python tutorial. Do you remember the other ones?

maths with Python

maths with Python 2: Rössler system

maths with Python 3: Diffusion Equation

maths with Python 4: Loading data.

Raspberry Pi 001: Setup and Run first Arduino-Python project.

Raspberry Pi 002: Pi Camera, start up scripts and remote desktop

maths with Python 5: Double Compound Pendulum Chaotic Map

In this one we are going to work with the Twitter API and Python.

First of all, what is an API? In short, an API is a library that allows one piece of software to communicate with another piece of software or hardware. In this case, the Twitter API allows your program to communicate with Twitter, thinking of Twitter as a piece of software.

Second, for which coding languages are there libraries? That depends on the developers and changes quite a lot. The Twitter Developers webpage has a list of the available libraries for different languages (that doesn't mean there are no others; there are possibly thousands made by amateurs, but they are not updated regularly).

We are interested here in the libraries for Python:

  • tweepy maintained by @applepie & more — a Python wrapper for the Twitter API (documentation) (examples)
  • python-twitter maintained by @bear — this library provides a pure Python interface for the Twitter API (documentation)
  • TweetPony by @Mezgrman — A Python library aimed at simplicity and flexibility.
  • Python Twitter Tools by @sixohsix — An extensive Python library for interfacing to the Twitter REST and streaming APIs (v1.0 and v1.1). Also features a command line Twitter client. Supports Python 2.6, 2.7, and 3.3+. (documentation)
  • twitter-gobject by @tchx84 — Allows you to access Twitter’s 1.1 REST API via a set of GObject based objects for easy integration with your GLib2 based code. (examples)
  • TwitterSearch by @crw_koepp — Python-based interface to the 1.1 Search API.
  • twython by @ryanmcgrath — Actively maintained, pure Python wrapper for the Twitter API. Supports both normal and streaming Twitter APIs. Supports all v1.1 endpoints, including dynamic functions so users can make use of endpoints not yet in the library. (docs)
  • TwitterAPI by @boxnumber03 — A REST and Streaming API wrapper that supports python 2.x and python 3.x, TwitterAPI also includes iterators for both API’s that are useful for processing streaming results as well as paged results.
  • Birdy by @sect2k — “a super awesome Twitter API client for Python”

Here we are going to work with Tweepy, but you can try any of the others; they should work in a similar way.


step 1: Install the Tweepy library into your Python distribution (I usually use Anaconda from Continuum Analytics). Just open a command window and run:

easy_install tweepy

It should install the library without any problems.

command window

step 2: Get a Twitter account (You can skip this step if you already have one). I made @Brickinthesky for that.


step 3: Go to the Twitter Developers webpage and get the keys to access Twitter through the API with your Twitter account. First you need to sign in with your Twitter account.


Once signed in, you can click on “My applications” to create the key codes.


Since it is the first time you are using the API, there are no apps yet, so just click and create a new one.

Fill in the requested details and don't worry too much if you don't know what to put there. For instance, the website field can be changed later.



Hit create and voilà!


See the red arrow? It indicates a field you need to change in order to be able to send and receive data from Twitter. Click on it and change it to Read and Write.


Update the settings and click on the API Keys tab. There, scroll down to “Create my access token”.


Possibly you will need to wait a little bit… and refresh. And… here they are: the API key and secret, and the access token and secret.


step 4: Get the tweets in your public timeline. Go back to Python; with this code (where you have to use your own API key and token) you will be able to see your timeline.

Get Timeline:

import tweepy

# Copy the api key, the api secret, the access token and the access token secret from the relevant page on your Twitter app

api_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
api_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

access_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
access_token_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

# You don't need to make any changes below here

# This bit authorises you to ask for information from Twitter

auth = tweepy.OAuthHandler(api_key, api_secret)
auth.set_access_token(access_token, access_token_secret)

# The api object gives you access to all of the http calls that Twitter accepts

api = tweepy.API(auth)

# Retrieve the last 20 tweets from your timeline
public_tweets = api.home_timeline()

# For each tweet in your timeline. Print out the tweet text
for tweet in public_tweets:
    print(tweet.text)

And the result: the tweets in your timeline appear in Python.


step 5: Let’s get the tweets from another account. Let’s try Neil deGrasse Tyson, whose Twitter profile looks like this right now.


Get his id below his picture, “neiltyson”, and put it in this code:

Get ID Timeline

import tweepy

# Copy the api key, the api secret, the access token and the access token secret from the relevant page on your Twitter app

api_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
api_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

access_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
access_token_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

# You don't need to make any changes below here

# This bit authorises you to ask for information from Twitter

auth = tweepy.OAuthHandler(api_key, api_secret)
auth.set_access_token(access_token, access_token_secret)

# The api object gives you access to all of the http calls that Twitter accepts

api = tweepy.API(auth)

# Now, instead of getting your own tweets we are going to gather another users tweets.
# Look here for help http://tweepy.readthedocs.org/en/v2.3.0/api.html
tweets =  api.user_timeline(id='neiltyson')

for tweet in tweets:
    print(tweet.text)

And after running the code… here they are, deGrasse Tyson’s tweets.


step 6: Now let’s publish something on Twitter using Python. For that, I know that the correct instruction is “api.update_status”, but I don’t know how to use it, so let’s look at the API reference.


Hmmm it looks quite simple, just….

Publish status

import tweepy

# Copy the api key, the api secret, the access token and the access token secret from the relevant page on your Twitter app

api_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxx'
api_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

access_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
access_token_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

# You don't need to make any changes below here

# This bit authorises you to ask for information from Twitter

auth = tweepy.OAuthHandler(api_key, api_secret)
auth.set_access_token(access_token, access_token_secret)

# The api object gives you access to all of the http calls that Twitter accepts

api = tweepy.API(auth)

api.update_status('First status from Python')

and it appears on Twitter.


step 7: Generate and upload a picture from Python. For this we are going to create a graph and upload it to Twitter. Simply…

Publish image

import tweepy

# Copy the api key, the api secret, the access token and the access token secret from the relevant page on your Twitter app

api_key = 'xxxxxxxxxxxxxxxxxxxxxx'
api_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

access_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
access_token_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

# You don't need to make any changes below here

# This bit authorises you to ask for information from Twitter

auth = tweepy.OAuthHandler(api_key, api_secret)
auth.set_access_token(access_token, access_token_secret)

# The api object gives you access to all of the http calls that Twitter accepts

api = tweepy.API(auth)

import matplotlib.pyplot as plt
import numpy as np

# Start just above zero to avoid dividing by zero in 1/x
x = np.linspace(0.001, 2*np.pi, 1000)
y = np.sin(1/x)

graph_path = r'C:\Users\Hector\Documents\Python Scripts\graph.png'
plt.plot(x, y)
plt.savefig(graph_path)

api.update_with_media(graph_path, 'Uploading a graph!!')

And this is the result. Isn’t it cool?


step 8: Networks. Basically, we want to see how the followers of somebody are related. To do that we are going to use… the free open-source network visualization software… Gephi.

Just download and install it.

And now we need to know how to generate files for Gephi using Python.

To do that we are going to create CSV files, which are the easiest ones (list of supported files). It seems that it is something as simple as… creating a txt file and writing inside:


save it as *.csv

Go to Gephi and open it (ignore the warnings, just hit the Ok button). To display the graph, go to Preview, add the names of the nodes, refresh… and voilà!
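If you prefer to generate the file programmatically, a minimal edge list can be written with Python's built-in csv module (the node names here are just placeholders for the sketch):

```python
import csv

# A minimal edge list for Gephi: each row is "source target".
edges = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')]

with open('net.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile, delimiter=' ')
    for source, target in edges:
        writer.writerow([source, target])
```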


Cool!! Now back to Tweepy. We are going to use the Tweepy command “api.followers(user)” to see the followers of a particular user. And to save the data into *.csv files we are going to use the csv library (which doesn’t require installation, at least with this Python distribution). The final code is basically built like a fractal drawing routine: you create a function that calls itself (with a “counter” that decreases each time the function calls itself, so it can only recurse a finite number of times). Here is the code:

import tweepy

# Copy the api key, the api secret, the access token and the access token secret from the relevant page on your Twitter app 

api_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' 
api_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' 
access_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' 
access_token_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' 
# You don't need to make any changes below here

# This bit authorises you to ask for information from Twitter
auth = tweepy.OAuthHandler(api_key, api_secret) 
auth.set_access_token(access_token, access_token_secret) 
# The api object gives you access to all of the http calls that Twitter accepts 
api = tweepy.API(auth) 

# User we want to use as the initial node (e.g. the account from step 5)
user = 'neiltyson'

import csv
import time

# This creates a csv file; each new entry will be written on a new line
csvfile = open(user + 'net4.csv', 'w', newline='')
spamwriter = csv.writer(csvfile, delimiter=' ', quotechar='|', quoting=csv.QUOTE_MINIMAL)

# This is the function that takes a node (user), looks for all its followers,
# prints them into the CSV file... and then looks for the followers of each follower...
def fib(n, user, spamwriter):
    if n > 0:
        # There is a limit to the traffic you can have with the API, so you need
        # to wait a few seconds per call or after a few calls it will restrict
        # your traffic for 15 minutes. This parameter can be tweaked
        time.sleep(40)
        users = api.followers(user)
        for follower in users:
            # Write the edge user -> follower, then recurse one level deeper
            spamwriter.writerow([user, follower.screen_name])
            # n defines the level of autorecurrence
            fib(n - 1, follower.screen_name, spamwriter)

# Two levels is usually enough (and already a lot of API calls)
fib(2, user, spamwriter)
csvfile.close()
This will create a *.csv file which you can open in Notepad, and it will look like…


And once loaded in Gephi… voilà!!!


Hope you liked this very long post and that it helped people get used to working with the Twitter API.


It has been a while since my last MATLAB post so…

I was trying to retrieve data from the National Weather Service  for another post when I found a protocol.


The protocol is called SOAP (Simple Object Access Protocol). In simple words, it defines a standard procedure to communicate information between different computers over the internet. (I’m not an expert, so I’m basically learning while writing this.)

So, let’s make it simple. Suppose we have two computers, HOME and DATA_SERVER. We have a program on HOME that wants to retrieve some data from DATA_SERVER, maybe raw data plus some basic operation such as counting the number of elements. How to do it? The HOME computer can send an XML document to DATA_SERVER, and DATA_SERVER will read that document and execute the instructions in it, most probably returning some data. An XML document is simply a kind of text file that uses indentation and mixes code with normal language. Something like this:
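As a made-up illustration (not the National Weather Service format), a tiny XML document could look like:

```xml
<?xml version="1.0"?>
<order>
  <item>
    <name>coffee</name>
    <quantity>2</quantity>
  </item>
</order>
```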


Now, because it would be a complete mess with so many different systems, programs and ways of writing code, there is a standard for how to write that XML document; it is called SOAP. SOAP describes a standard way of writing the XML document so it can be understood by a wide range of programs, and if there is some update to a program, the XML will still work. An XML document written according to SOAP will look something like this:


Or being more specific, something like…

POST /InStock HTTP/1.1
Host: www.example.org
Content-Type: application/soap+xml; charset=utf-8
Content-Length: 299
SOAPAction: "http://www.w3.org/2003/05/soap-envelope"

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
  </soap:Header>
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock">
      <m:StockName>IBM</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>


I’m trying to keep this as simple and understandable as possible.

Now, how to create this SOAP documents?

Well, in fact there is an easy way for HOME to create the SOAP document for sending it to DATA_SERVER.

Apart from SOAP, we have WSDL (Web Services Description Language). A WSDL document is an XML document that describes all the instructions that a machine/program can accept and how to call them. Basically, WSDL tells HOME what kind of instructions to send to DATA_SERVER inside the SOAP document. At the same time, it is standard and its usage can be automated, so you can have a program on the HOME computer that analyzes data from internet sources. If a new source appears, it will have its own WSDL document telling you how to access its data using a SOAP document. You simply download that WSDL document and ask your computer to generate the code.

Ok ok, sounds nice… AN EXAMPLE PLEASE!!!!

MATLAB Real-Time currency converter

The description I just made is a little bit simplified, but it should be enough to understand how these things work. In the end, there is a set of nodes, paths, intermediaries… that read the SOAP documents, interpret them and redirect everything.

Let’s just say that one source of WSDL documents for accessing several different databases is WebserviceX.NET


On this web page you can find WSDL documents for many databases. The one we are interested in now is the currency exchange one.


The currency exchange WSDL will tell our program (in this case MATLAB) how to ask for the current exchange rate. When we click on the link… we access a webpage with the WSDL link.

webservicex currency

Step 1: Just copy the link to the WSDL document to use it in MATLAB: http://www.webservicex.net/CurrencyConvertor.asmx?WSDL

Now go to MATLAB. The documentation that will help us is Access Web Services Using MATLAB SOAP Functions.

Step 2: Use the WSDL file to create a MATLAB class.


It will create a folder for the class in your working directory (it will be named @CurrencyConvertor).

Step 3: Create an object of the new class (the creation name of the class will be the name of the directory, which will have an m-file inside with the same name; the rest of the m-files will be procedures to be used with the object).


I cannot fully explain here what a class in MATLAB means, but suppose you want to create vectors and tell MATLAB how to work with a vector. You would define a class vector, which has fields, properties, etc… in the simplest case, just a set of numbers. Then, apart from those definitions, you define procedures to work with vectors. One procedure could be, for instance, how to add 2 vectors (say, by adding together their sets of numbers).

Step 4: If you go into the created class directory, you will find 3 m-files. If you open them, one is to create the class, one is to display the information of the WSDL, and the other one, called ConversionRate, is to request the conversion rates. It’s just a function to be used with the object we just created, “x”, and its description tells us how to use it. For instance, retrieving the conversion rate between the US Dollar and the United Kingdom Pound:

ConversionRateResult = ConversionRate(x,'USD','GBP')

And the answer is 0.5948. Is it correct? According to Google, it is:

google currency exchange
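By the way, you don't need MATLAB for this. As a sketch, a Python script could build the same SOAP request with the standard library alone. The namespace below is the one the WebserviceX WSDL advertises (treat it as an assumption and check the WSDL), and the commented-out posting part assumes the service is still online:

```python
def soap_envelope(from_currency, to_currency):
    """Build a SOAP body for the ConversionRate operation of the
    WebserviceX currency converter (namespace taken from its WSDL)."""
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body>'
        '<ConversionRate xmlns="http://www.webserviceX.NET/">'
        '<FromCurrency>{0}</FromCurrency>'
        '<ToCurrency>{1}</ToCurrency>'
        '</ConversionRate>'
        '</soap:Body>'
        '</soap:Envelope>'
    )
    return body.format(from_currency, to_currency)

# Posting it (only works while the service is online):
# import urllib.request
# req = urllib.request.Request(
#     'http://www.webservicex.net/CurrencyConvertor.asmx',
#     data=soap_envelope('USD', 'GBP').encode('utf-8'),
#     headers={'Content-Type': 'text/xml; charset=utf-8',
#              'SOAPAction': 'http://www.webserviceX.NET/ConversionRate'})
# print(urllib.request.urlopen(req).read())
```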


MATLAB Stock Exchange Quote  

This one will tell us how to get real data from the stock exchange.

I think many people will find this one extremely interesting.

Step 1: Get the WSDL file from WebserviceX

stock exchange

stock exchange2

Step 2: As before create a class:


Step 3: Create an object of that class


and use it to get the stock value of… “IBM” (of course, you need to look up the ticker symbols, and one good place to do that is Yahoo Finance)

yahoo finance

Step 4: Get the stock value of IBM. As before, checking the class folder we see that GetQuote.m is the file to get the data.


This time the answer is a string with text and values: <StockQuotes><Stock><Symbol>IBM</Symbol><Last>186.37</Last><Date>6/6/2014</Date><Time>4:01pm</Time><Change>+0.39</Change><Open>186.47</Open><High>187.65</High><Low>185.90</Low><Volume>3296900</Volume><MktCap>188.6B</MktCap><PreviousClose>185.98</PreviousClose><PercentageChange>+0.21%</PercentageChange><AnnRange>172.19 – 206.98</AnnRange><Earns>14.626</Earns><P-E>12.72</P-E><Name>International Bus</Name></Stock></StockQuotes>
This string can be scanned very easily to get only the desired data.
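As a quick illustration (in Python here rather than MATLAB), the built-in XML parser can pull individual fields out of that string; the snippet below uses a shortened copy of the answer above:

```python
import xml.etree.ElementTree as ET

# Shortened copy of the GetQuote answer shown above
quote = ('<StockQuotes><Stock><Symbol>IBM</Symbol><Last>186.37</Last>'
         '<Date>6/6/2014</Date><Change>+0.39</Change></Stock></StockQuotes>')

root = ET.fromstring(quote)
stock = root.find('Stock')
last_price = float(stock.find('Last').text)
print(stock.find('Symbol').text, last_price)  # IBM 186.37
```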

For the last example I’m going to use a different web page. It’s called Xmethods and it’s a repository of WSDL links.

logo_smallThe last example is…

MATLAB to compare images on the internet

Basically, this is going to take the URLs of 2 images, compare them, and give a number: the closer to zero, the more similar the images.

Step 1: Get the WSDL which is: http://www.quisque.com/fr/techno/eqimage/eqimage.asmx?WSDL

Step 2: Create the class and one object of the class.


Step 3: Compare these 3 images:

A (url: http://thegoodride.org/wp-content/uploads/2014/04/goonies_map_2010_a_l.jpg)


B (url: http://www.blogcdn.com/blog.moviefone.com/media/2010/06/gooniesmainscenes.jpg)

C (url: http://media0.giphy.com/media/ftvw5gPRJzdcc/200_s.gif)


And the code is




which gives us the results…
To be honest… I think it is more useful for knowing whether an image is exactly the same, but not so good for similarity comparison.

Hope you like this short introduction.







What? Want more?




What I showed you was very nice… but it could be even better. Think about it commercially. Ignite is the answer.


Ignite is the same as WebserviceX.NET but commercial. That means you will have access to many more databases and much more information: energy demand, currency flows into companies, news headlines… but of course you have to pay to get that information, and you need to use a login in your SOAP files.



Paperscape is a tool to visualize papers on arXiv (arXiv is a free open archive for scientific papers and preprints); not the information inside the papers, but the relations between papers. Each paper is represented by a circle, and the size of the circle represents how many citations it has. To group the papers together they use an algorithm based on the common citations between papers.

So basically, it is a tool to see what is going on in physics and what the important things in your area are.


Ok, let’s give it a try.

I’m going to look for Neural Networks, my old topic (I hope to go back to it soon).


Nice, they lie close to quantitative biology.

And now my new topic, magnetic domain walls.


Not bad! Pretty close actually! And  computer science is in between!

Hope you like it.

Neko Time!!! =^_^= matthen

Last week I spent some days in Cambridge performing some experiments at the university, and as always happens when I go to Cambridge, I learned something new.

This time I came across Matt Henderson’s blog.


Matt has a mathematics degree from Cambridge University, and he is now working on his PhD on statistical dialogue systems.

He is an unstoppable explorer. His blog is full of experiments and nice mathematical simulations, and here I want to show you the ones I like the most. Who knows, maybe a collaboration between us could be possible in the future.

So, here they are.



Basically, if you have particles moving randomly that can stick to a seed, then these random patterns appear. They are closely related to chemical reactions and electrical transport. Nice post, with code, and a link to… Aggregation images by Andy Lomas.
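This process is known as diffusion-limited aggregation (DLA). A minimal on-lattice sketch in Python (the sticking rule, cluster size and escape radius here are arbitrary choices for illustration, not his code):

```python
import math
import random

def dla_cluster(n_particles=40, seed=1):
    """Grow a DLA cluster: random walkers released around the cluster
    stick to it as soon as they step next to an occupied site."""
    random.seed(seed)
    cluster = {(0, 0)}            # the initial seed particle
    max_r = 1                     # radius of the cluster so far
    neighbours = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) < n_particles:
        # launch a walker on a circle just outside the current cluster
        ang = random.uniform(0, 2 * math.pi)
        r0 = max_r + 2
        x, y = round(r0 * math.cos(ang)), round(r0 * math.sin(ang))
        while True:
            dx, dy = random.choice(neighbours)
            x, y = x + dx, y + dy
            if x * x + y * y > (r0 + 20) ** 2:
                break             # wandered too far away: release a new walker
            if any((x + ex, y + ey) in cluster for ex, ey in neighbours):
                cluster.add((x, y))
                max_r = max(max_r, int(math.hypot(x, y)) + 1)
                break
    return cluster

cluster = dla_cluster()
```

Plotting the points of `cluster` gives the branchy structures from the post.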



The Gingerbreadman is a chaotic map. Basically, you select points in the plane and, using very simple equations, transform them into new ones. If you repeat this enough times, a figure appears that looks like a gingerbread man. And I like this one because I also explored it myself. Remember this?
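In case you want to play with it, the standard gingerbreadman equations are x' = 1 − y + |x| and y' = x, so a minimal iteration looks like this (the starting point is an arbitrary choice):

```python
def gingerbreadman(x, y, steps):
    """Iterate the gingerbreadman map and return the visited points."""
    points = []
    for _ in range(steps):
        x, y = 1 - y + abs(x), x
        points.append((x, y))
    return points

# A short orbit starting from an arbitrary point in the chaotic region
orbit = gingerbreadman(1.0, 3.7, 2000)
```

Plotting the orbit as a scatter of points draws the gingerbread man.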

Iterated Function Systems.


Iterated function systems are a technique to build fractals using transformations of points. It’s similar to the Gingerbreadman map, but with a set of equations that alternate randomly. And I also explored this one! Remember the 100 posts post?
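As a concrete example of an iterated function system, this “chaos game” sketch jumps halfway towards a randomly chosen vertex of a triangle at each step; the cloud of points it produces is the Sierpinski triangle (the vertices and starting point are arbitrary choices):

```python
import random

def chaos_game(steps=5000, seed=0):
    """Sierpinski triangle via an IFS: repeatedly jump halfway
    towards a randomly chosen vertex of a triangle."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.25, 0.25                      # arbitrary starting point
    points = []
    for _ in range(steps):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2  # contraction towards the vertex
        points.append((x, y))
    return points

pts = chaos_game()
```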


Double pendulum.


This was the post that brought me to the blog. The double pendulum is an example of a quite simple chaotic system; it’s just two linked pendulums. In the image on top we can see 2 double pendulums; what the animation wants to show is that quite similar initial conditions can evolve very differently. (I’m working on a nice post about this, but I’m not telling you anything more now.)



This is an applet to play with iterated function systems. It uses the geometrical approach for defining the functions used to perform the iteration. I like it, it’s quite good. Unfortunately, it’s difficult to reproduce patterns reliably.

Create GIF animations with Mathematica.

I don’t like Mathematica, I prefer MATLAB or Python, but… who knows, this could be useful.

Animated Optical Illusions.


I saw this effect long ago in a book. I like it. I never had enough time to make anything with it, but here you can see how it works.

Designing Galleries.


In this post what he wants to show is the importance of building design. Basically, a good design can produce a museum where you can visit each room exactly once without crossing paths with other visitors. Or… if it is a mall, how to design it to make people walk past the same point several times (increasing the exposure of a particular shop).

Soap film holes.

The film doesn’t belong to the blog, but it is so amazing…

Shepard scale.

I like this one; it’s my first sound illusion. Basically, you feel like the scale keeps getting higher, but it does not.

A tautochrone (or brachistochrone, if you focus on another of its properties) is a curve where, no matter where you place a ball on it, it always takes the same time to reach the bottom point. I had seen the brachistochrone many times and never realized that it also has this property. I can think of quite a few fun experiments for it now.
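You can even check the tautochrone property numerically: for a cycloid of rolling radius r, the descent time should come out as π·sqrt(r/g) no matter where the ball starts. A quick sketch (the step size and starting angles are arbitrary choices):

```python
import math

def descent_time(theta0, r=1.0, g=9.81, dt=1e-5):
    """Time for a frictionless bead to slide from angle theta0 to the
    bottom (theta = pi) of the cycloid x = r(t - sin t), y = r(1 - cos t).
    The tangential acceleration along the curve is g*cos(theta/2)."""
    theta, v, t = theta0, 0.0, 0.0
    while theta < math.pi:
        v += g * math.cos(theta / 2) * dt               # speed along the curve
        theta += v / (2 * r * math.sin(theta / 2)) * dt  # advance along the curve
        t += dt
    return t

# Released from very different heights, the bead reaches the bottom
# in (numerically) the same time, close to pi*sqrt(r/g)
t_high, t_low = descent_time(0.5), descent_time(2.0)
```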


f[x_] := Print[StringJoin[x, FromCharacterCode[{91, 34}], x, FromCharacterCode[{34, 93}]]];
f["f[x_]:=Print[StringJoin[x,FromCharacterCode[{91, 34}],x,FromCharacterCode[{34, 93}]]];f"]

A quine is a piece of code that is able to print itself. I had heard about them before, but this is the first time I’ve seen one for Mathematica.
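For comparison, here is the classic Python quine, built on the same idea of a string that formats a copy of itself into itself:

```python
# A two-line quine: printing s % s reproduces the source exactly
s = 's = %r\nprint(s %% s)'
print(s % s)
```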

And that’s all. If you want more, visit his blog. Hope you like it!


Yesterday I discovered a web page that makes one of my visions for the future real. One day neural networks will perform tasks that are impossible for today’s algorithms. Until then, why not use the internet and the power of thousands of humans to perform those tasks? Well, it has been done, and its name is Zooniverse.


On this web page you can access many different projects and help them. The main task is classification. The human brain’s ability to identify patterns and classify them has not yet been matched by computers, so that is what this web page asks its users for: help with classification tasks.
And at the same time, it’s like a small game, so it’s fun to help.

These are the projects involved.

  • Classification of galaxies from Hubble images. It seems an easy task, but Hubble has been taking photos for a long time, and the better the classification, the better the stellar population profile that can be obtained, and that can help to study the evolution of the universe. For more information about Hubble, please visit the HubbleSite.


  • Classification of craters on the Moon. Check images to distinguish between craters and mounds. To know more about the Lunar Reconnaissance Orbiter, you can visit its NASA webpage here.


  • Tracking solar storms to their origin. This is quite fun, because before any classification tasks, we are trained to do them. This job will help to predict the evolution of solar storms, which are a big danger to Earth and especially to satellites and electronic devices. I can say that this one is the most difficult so far. Helping in this project is no kids’ game; each survey has to be done with time tracking, using the Ahead and Behind STEREO views… For more information about the STEREO mission, visit the Wikipedia article.


  • Look for planets. Yeah, little green men, are you there? The Kepler mission measures the brightness of stars. That measure is not constant; there are many reasons for the amount of light from a star to vary. One of them is that the star has a planet nearby. Basically, each time the planet moves between the star and the Earth, the brightness of the star decreases. This is very difficult to spot because of the noise in the measurement and the variability in the data; it’s almost impossible to fit the brightness to a smooth function and look for the dips. For more information about the Kepler spacecraft, visit its web page.


  • The next one is to identify different materials in galaxies using infrared images. This is the link, and below is a video tutorial. In this case, identifying materials and their evolution through time will help to improve simulation models. That represents an improvement for universe evolution prediction, but also for other kinds of simulations, because there are many fundamental interactions happening in the evolution of a galaxy, and understanding them can help to improve simulations of more common things on Earth (like predicting the formation temperature of some chemical compound for the chemical industry). Please visit the Spitzer telescope web page to learn more about the tool that is making this possible.

  • The Red Planet! The HiRISE project is dedicated to analysing data from a camera on board the Mars Reconnaissance Orbiter. The main task of this program is to identify dust formations on Mars. That will help to track sand storms and distinguish between permanent features and weather effects. Apart from that, the HiRISE web page has many interesting photos; don’t forget to visit!


  • Next step, Speed Factor 12, Mr Data. We are going to help develop the Warp Drive. Well, not exactly; that is what they say in the project, but the truth is that we are looking for massive dark objects. According to relativity, the energy of an object can affect the space-time in its surroundings; that means that if we have a very massive object in the path of a light ray, the light ray will be deflected. The more mass the object has, the bigger the deflection. In this project they look for that kind of deflection. What does it look like? If you are looking at very far away galaxies, a massive object between you and them will deflect the light and create an optical illusion: a ring, a pair of objects, an arc of light… Detecting them is very difficult, but it can help to detect massive objects that are invisible to us. Yes, black holes.


  • I think this next project addresses one of the things I always asked myself about. How is it possible that some studies refer to the climate in eras when computers and data logging were not common? One of the answers is indirect measurements, like CO2 concentration in polar ice… but this one… this project consists of helping to digitize handwritten data from old vessels’ logs in order to track their travels and the climate changes they faced. I think the idea is nice, but the task is massively boring…


  • Classifying cyclones can help to prevent them or discover a trend in the data. Are they increasing? Are they becoming stronger? What can be learned from them?


  • The next project uses the Oxyrhynchus Papyri collection, and our task is to identify letters and text on those documents. It is a very hard task. I think this one is only interesting to history or archaeology students. To read more about the collection, visit the Wikipedia article.


  • I feel like I’m in Star Trek, mission Save the Earth. The scope of the next project is to identify whale sounds. It sounds amazing, but basically, you listen to one sound and try to match it with other sounds.
  • Since the first time I read 20,000 Leagues Under the Sea I have wanted to visit the bottom of the ocean. With this project you can see what it looks like. Your task is to identify components in sea floor pictures, like the kind of soil, fish, sea stars… and with that, help them to classify different kinds of floor. Remember that 2/3 of the planet is ocean; there is more sea floor than normal floor. Imagine what we can expect!
  • Now we want to monitor the population of bats. In this project we have a series of sounds recorded in different places, and we need to identify the sources of sound in each recording. Quite simple and nice. And at the same time, it challenges us to listen more carefully to our environment next time we hear something.


  • Camera traps on the Serengeti!!! I think I like this one the most, because I have a little insight into the other ones thanks to my background, but this one is like diving into a David Attenborough documentary, and at the same time, they are real images taken remotely.


  • This one is very similar to the archaeological ones. There is lots of data from museums that has been stored on paper and needs to be transcribed. This is a project to transcribe data about plants and insects. It’s not that interesting, sorry.


  • Help with cancer. That is a good project. Cancer is one of the worst illnesses of all time. Now you can help to fight against it. Here we need to classify cell samples and identify different parts in them. This project can help to improve detection techniques and accelerate the diagnosis of cancer, preventing it from developing and saving many lives.


  • The last one is the most disgusting. I apologize for that. You need to track worms and click “Z” whenever they lay an egg. Yuck!


Hope you enjoy the experiments!