What about a Community Leadership Summit in Italy?

I’ve been working closely with Italian tech communities for years now, and I’m convinced the time has come for a new step forward: running the first Italian Community Leadership Summit.

What

An occasion to connect community leaders, organizers and managers interested in growing and empowering strong communities. A chance to debate the present situation, visions and tools, to share the teaching and learning adventure, and to agree on common initiatives that make our communities more capable and effective. It’s a protected moment where managers can step out of their leadership roles and be the perfect community members they’ve always had in mind, working together to refine the art of community management and make it better understood and shared by us all. Nothing new, though.

Why

I feel the Italian context has now reached the right level of maturity for an event of this kind: we already have established networks of communities, like GrUSP, GDGs, DotNet etc, acting as meta-communities for the organisers and working together country-wide(ish). But generally only the same type of group belongs to each network, and the networks are disconnected from each other. Not because of any ill will, but simply because “we’ve never had the occasion to try”.

It has already happened that, during events like Codemotion or XXDays or others, community managers gathered, had fun together, complained about common issues, shared experiences and contacts and even worked on new ideas together. Having a dedicated moment, not just an occasion during another event, can potentially bring huge value to all the attendees.

Take for example Milan, where the community density (together with a good collaboration spirit led by Daniele) made it possible to create Milano Tech Scene. Big enough to offer cross-pollination occasions and small enough to still preserve the good old community mindset. Just for comparison, the density in cities like Paris, London or Berlin is so high that this kind of initiative wouldn’t be possible, and Meetup rules :) It’s time to scale this mindset to the whole country.

Finally, I trust cross-pollination as one of the keys to evolving the dev ecosystem, so the community scene should reflect this approach. Shared mindset, practices and skills on how to organize a successful event, contacts, next steps for the evolution of the community ecosystem and much more can only help that evolution, and knowing that such a network exists can help newcomers grow quickly and experienced managers refine their art.

Why am I proposing this?

Simple: be the change you want to see. I believe in the usefulness of the format, and I believe this is the right moment. Following a lean approach, the Summit is the MVP I want to validate. After an initial phase where I gathered ideas, impressions and thoughts, it’s time to test.

Then, I have another objective in mind: diversity, in particular the gender gap. As long as we simply complain about the low female presence at tech events, considering ourselves out of the equation because “we’re open to opposite-gender participation”, we’re not really doing much to improve the situation. All of us should contribute to the cause, no matter how small the contribution. Just do it. So, my first contribution to pushing everyone to act is to open registration to a person only if he/she brings along another person of the opposite gender. It could be an organiser of the same community or of a different one, I don’t mind, as long as we don’t cheat, for example by involving partners who have never taken part in community life. The aim is balanced participation. Damn difficult, but that’s what moonshots are for.

When and Where

Probably around November or December: earlier is too difficult, and I don’t want to wait too long. In Milan or somewhere else, proposals are more than welcome!

So, now?

I already have something in mind for the agenda, but I’ll probably discuss it in another post. Right now, I really want to know what you think about this idea. Lean methodology says it’s impossible to create something for the users if the users aren’t involved (also) in the problem-definition phase. This time it’s not a problem but an opportunity. So, YOUR comments and constructive criticisms are more than welcome.

Identify your Twitter followings inactive for more than 4 months

Spring is all about cleaning, the saying goes, so why not apply the same principle to the accounts I follow on Twitter? Why? Because I would like to keep their number under 400, and because I would like to grow my very limited Python skills.

With the help of TweetPony (one library among many), the task was pretty straightforward. The final result is a simple script that walks through the people I follow, checks the date of their last tweet and alerts me if it is older than four months.

Configure the Python environment (Ubuntu 14.04 Trusty)

I don’t want to pollute my system-wide Python installation with libraries and dependencies related to a single project, so I created a virtual environment. I’m still no master at this, so forgive my errors:

sudo apt-get install python-pip
sudo pip install virtualenv
cd %projectdir%
virtualenv build_dir
source build_dir/bin/activate

From now on, all pip commands will be executed inside the (build_dir) virtualenv, and not at the system-wide level. Time to install the TweetPony library:

pip install tweetpony

Once installed, I tried some examples from the GitHub repo to check that everything worked. And yes, it did (even without an API key and permissions; see later), but a boring console message appeared every time the script made a call to the Twitter API, probably caused by the old Python 2.7.6 version or the libs I was using:

InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning

To solve it, I installed some dev libraries required to compile a few other Python libraries (the pip installs, again, act inside the virtualenv only):

sudo apt-get install libssl-dev
sudo apt-get install libffi-dev
pip install cryptography
pip install pyopenssl ndg-httpsclient pyasn1
pip install urllib3

and added these lines of code at the beginning of the main function of the script, before any Twitter API call:

import urllib3.contrib.pyopenssl
urllib3.contrib.pyopenssl.inject_into_urllib3()

They did the trick! But, as I said, you may not need all of this.

The script

The script itself is pretty simple. I took the basic code that creates the TweetPony API object from the repo’s example folder, and with it I was able to get the user’s friends_ids (the accounts the user follows). Then, cycling through each one, I checked the status of that friend, looking at the last tweet date. Some corner-case management (like protected tweets or no tweets at all) and voilà, I had all I needed.
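
Here is a minimal sketch of the whole script, just to give the idea. get_api() is a hypothetical placeholder for the API-object creation code taken from the repo’s example folder, and the exact method and model names (friends_ids, get_user, user.status, APIError) are from memory, so double-check them against TweetPony’s docs and examples:

import time
from datetime import datetime, timedelta

import tweetpony


def get_api():
    # Hypothetical helper: paste here the API-object creation code from
    # TweetPony's example folder (it handles the OAuth dance and persists
    # the tokens in .auth_data.json).
    raise NotImplementedError


def main():
    api = get_api()
    threshold = datetime.utcnow() - timedelta(days=4 * 30)  # ~four months
    for friend_id in api.friends_ids():  # the accounts I follow
        try:
            user = api.get_user(user_id=friend_id)
        except tweetpony.APIError as err:
            print "Skipping %s: %s" % (friend_id, err)  # e.g. protected account
            continue
        status = getattr(user, 'status', None)  # corner case: no tweets at all
        if status is None:
            print "%s has never tweeted" % user.screen_name
        elif status.created_at < threshold:
            print "%s last tweeted on %s" % (user.screen_name, status.created_at)
        time.sleep(6)  # stay under the 180-calls-per-15-minutes rate limit


if __name__ == '__main__':
    main()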

Regarding authentication, all Twitter libraries require a consumer key and consumer secret to work, in addition to an OAuth access_token and access_token_secret. What made me prefer TweetPony over other libs, like tweepy or python-twitter, is that TweetPony doesn’t require anything: a test consumer key and secret are kindly embedded in the lib source, while OAuth tokens are created on the fly for you and persisted in a file, .auth_data.json. To use your own credentials, simply delete that file and add somewhere, at the beginning of your code, these two lines, with the key and secret obtained from the Twitter Dev Console:

tweetpony.CONSUMER_KEY = 'xxxx'
tweetpony.CONSUMER_SECRET = 'xxxxx'

A final consideration about Twitter API usage: there is a limit of 180 calls every 15 minutes, so I added a sleep after every check. Slow, but it worked with the 500+ accounts I follow :)

Converge Hackathon: developers + designers + diversity. Is it even possible?

One of the cool aspects of my current job is the freedom I have to experiment with what I think is valuable and important for the developer ecosystem. This time I tried to tackle two aspects, both under the diversity umbrella: expertise mix and gender gap.

In collaboration with frog design (thanks Laura and Alex for the help), we envisioned a platform to experiment and iterate around these topics, and so we created the “Converge Hackathon” format. Let’s analyse the main idea and the first implementation, held at Google HQ in Milan on March 7th.

First, why a hackathon?

We all know what a hackathon is: a fixed amount of time for experimenting with new things, getting in touch with smart people and having fun with one’s passions. In addition, the “Converge Hackathon” aims to improve the collaboration between designers and developers during the whole process of thinking, refining and realizing an idea. Hence the name. And because I viscerally love the hackathon format ;)

[Photo: Converge Hackathon. “Don’t be shy and… present!”]

How did the collaboration between developers and designers go?

Pretty well, I would say. The collaboration was one of the most acknowledged strengths of the event. Here are some of the attendees’ comments:
“Was challenging to work with stranger but at the same time interesting and funny. The best part was the division of the work”
“The collaboration was really good. It was my first time working with developers and I enjoyed a lot. Otherwise, I think it was needed a bit more of integration regarding with how the design and the coding could be merge”
“I’ve meet a lot of interesting people and different points of view on even the simplest thing”
“Good organization, very nice the initiative of mixing designers with developers and give an opportunity to work together”
Although it was challenging:
“I’m a designer. Speaking with Developer is very difficult because they only think in their square area.”
“At the beginning was difficult to know new people and get in touch with the developers”
To summarise: no pain, no gain when you start this kind of collaboration :) But the feedback showed that the audience gained a lot, despite some small pains.
We balanced the attendees at roughly 2/3 developers and 1/3 designers, and frog carefully selected the latter, reviewing their portfolios, their profiles and their activities: they wanted to be sure that the right profiles were part of the crowd. The developers, instead, I let in without any particular screening. I trust in natural selection ;)
Another learning point was about team creation: such a mixed crowd requires focused pre-work to blend people properly, something that goes beyond the quick ice-breakers we ran in the morning, which generally work well at a standard hackathon. Dedicating the right attention to this aspect is crucial.
One final consideration is about timing: a one-day event makes it hard to create something meaningful. The ideation phase, which is generally very short during a normal hackathon because attendees are eager to “get their hands dirty with code”, was this time fostered, and mostly led, by the designers. The result was that the final hacks were more elaborate than the average I’ve seen, but with the drawback of prototypes less “working” than usual. As a note for us organisers: next time we need to keep the ideation process inside a given timeframe, otherwise the risk is that, once the first half of the event has gone, teams are still thinking about what to build.

[Photo: Converge Hackathon. “Diversity? Really not an issue for this team”]


The Marshmallow Challenge: icebreaker and lessons teacher

I’ve found an interesting game that can be used both as an icebreaker and for teaching a fundamental lesson about the importance of prototyping before fully committing to a project (sounds lean? Oh yes, it is!). It’s called the Marshmallow Challenge: it can be run in groups of 4, there is no age constraint and it requires less than 20 minutes.

Each group gets 20 sticks of spaghetti, 1 meter of tape, 1 piece of string and 1 marshmallow. The challenge is to build with them, within 18 minutes, a free-standing structure with the marshmallow on top. The winner is the group that achieves the maximum height between the marshmallow and the table.

It seems fun, and I think it really is, and some important lessons emerge from the game: more info in this TED 2006 talk and in a more recent one. For me, both lead to the same conclusion: prototyping and a good team move ideas to success ;)

I’ll start adopting this icebreaker in my community meetings and see what happens. Sounds cool ;)

Raspberry Pi, RPi Camera and Roomba: a first-person experience of the housecleaning

Today’s challenge

Have a first-person view of the Roomba cleaning, using a Raspberry Pi, an RPi Camera and some additional stuff.


Raspberry basic configuration

Hardware-wise, I used the most standard components available: a Raspberry Pi model B, the RPi Camera module and an Edimax EW-7811Un 150Mbps Wi-Fi USB card.

Regarding the OS, there are a lot of distributions available for the Raspberry Pi, but I went for plain Raspbian: wide support, flexibility, the de-facto standard. I installed it using the NOOBS setup, following the detailed instructions to load the NOOBS image onto an SD card from my Ubuntu PC. Then I started the RasPi, selected Raspbian from the distro installation menu and waited a little for the installation to complete.
In the raspi-config app, I enabled the camera and the SSH server. Then the usual sudo apt-get update && sudo apt-get -y dist-upgrade && sudo rpi-update && sudo reboot combo, to update everything. The RasPi is ready to rock :)


WiFi setup

Configuring the WiFi is easy with the right card (an evergreen truth in the Linux world). Because the Edimax is supported natively, the only thing I did was:

sudo nano /etc/network/interfaces

and added these lines at the end of the file (change them for a different WiFi encryption model). Alternative steps are available too.

allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid "YOUR_NETWORK_SSID"
wpa-psk "YOUR_WIFI_PASSWORD"

Reboot and voilà: now the ethernet cable can be unplugged, and the RasPi has made another step toward freedom of movement.


Camera software and configuration

There are different options available. Out of the box, Raspbian can both capture still images from the camera and record videos, using the raspistill and raspivid commands respectively.

Another option is to use motion as a backend to expose the camera video stream. The only problem is that the stock version doesn’t (yet) support the RasPi camera device file, so a big thanks to dozencrows for the fixes he has done. There is a detailed forum post with the final files to replace on the RasPi, plus a couple of tutorials explaining the detailed steps for a working setup, including the sudo apt-get install libjpeg62 part.
With a working motion installation, different frontends can be used to watch the camera stream, like the motion server itself, motionEye etc, and the RasPi can even be seen as a normal IP camera by a lot of software, including Synology Surveillance Station.
Too lazy to do everything by hand? A complete distro, MotionPie, offers an out-of-the-box solution in active development. Flashing the image onto an SD card and rebooting the RasPi is all that’s required, wifi configuration apart.

I also found another project, RaspberrIPCam, a fork of raspivid that offers a working website for accessing the camera stream, Synology Surveillance Station integration and more. Step-by-step guide here.

But, personally and because of the challenge, I chose yet another way: the RPi Cam Web Interface project.


RPi Cam Web Interface

This solution works in a pretty clever way: instead of using the motion infrastructure, it relies on the raspimjpeg command, which captures single frames from the camera stream and whose configuration can be changed on the fly by writing commands to a Unix pipe. motion is then configured to point to the raspimjpeg output, so it can do all its motion-detection magic without real access to the camera. Finally, Apache serves a micro-site where the camera stream is shown, and some PHP scripts glue everything together and provide additional features, like easy access to the captured video files, changing camera parameters, controlling the entire RasPi from the web interface (including a restart / reboot) and much more.
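
As a taste of how scriptable this is, here is a tiny hypothetical Python fragment that writes to the pipe the same 'ca' commands motion uses (see the notes at the end of this post) to start and stop a video capture:

# Start and stop a raspimjpeg video capture through its control pipe
# ('ca 1' / 'ca 0' are the same commands motion sends on events).
import time

with open('/var/www/FIFO', 'w') as fifo:
    fifo.write('ca 1\n')  # start capturing

time.sleep(10)  # record for ten seconds

with open('/var/www/FIFO', 'w') as fifo:
    fifo.write('ca 0\n')  # stop capturing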

To install RPi Cam Web Interface, I executed

cd
git clone https://github.com/silvanmelchior/RPi_Cam_Web_Interface.git
cd RPi_Cam_Web_Interface
chmod u+x RPi_Cam_Web_Interface_Installer.sh
./RPi_Cam_Web_Interface_Installer.sh

and started the whole app (only the first time), using

./RPi_Cam_Web_Interface_Installer.sh start

The install script allows customizing the Apache directory where all the files are stored: change the rpicamdir variable value in the script itself. Doing so, it’s possible to avoid conflicts with other apps that serve additional pages and sites from the same RasPi. Additional info is available by reading the bash file. Use the source, Luke!

To access the website, opening http://raspi_address/ from any browser is enough.

To disable the red light on the camera module, I ran

sudo nano /boot/config.txt

and added the following line at the end of the file

disable_camera_led=1


Roomba setup

Thanks to the WiFi, the RasPi could freely connect to the network, and a USB power bank provided the necessary power for the experiment. Scotch tape to assemble everything, and here is the final result. Pretty cool, isn’t it?

(tinyCam Monitor is responsible for casting the camera stream to the TV, thanks to a Chromecast)


Notes on RPi_Cam_Web_Interface

Some salient parts of the RPi_Cam_Web_Interface architecture, mainly to help my memory over time.

motion is configured to read images from the local server, using netcam_url http://localhost/cam_pic.php; this PHP script returns the content of the /dev/shm/mjpeg/cam.jpg file, where raspimjpeg writes the camera preview thanks to the setting preview_path /dev/shm/mjpeg/cam.jpg.
When motion detects something, it executes the command on_event_start echo 'ca 1' > /var/www/FIFO, and the corresponding on_event_end echo 'ca 0' > /var/www/FIFO when the event ends. /var/www/FIFO is the Unix pipe file used to control raspimjpeg via the control_file /var/www/FIFO option. Doing so, raspimjpeg creates the video files that are then listed in the web page with all the captured files.
All motion web-service options and recording capabilities are switched off in the config file.

When installing, a link to the camera preview file is created under the Apache site directory using the command sudo ln -sf /run/shm/mjpeg/cam.jpg /var/www/$rpicamdir/cam.jpg. This way, http://raspberry_ip/cam.jpg returns the latest image from the camera, and Android apps like tinyCam Monitor can point to this address to show the camera stream, image by image.
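
Just to illustrate the image-by-image mechanism, a tiny hypothetical Python client can do the same polling tinyCam does (replace raspberry_ip with the real address of the RasPi):

# Poll the latest camera frame, roughly once per second
import time
import urllib

while True:
    urllib.urlretrieve('http://raspberry_ip/cam.jpg', 'latest.jpg')
    print 'got a new frame'
    time.sleep(1)  # be gentle with the RasPi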

To create a Python app that uses the camera, the picamera interface can be used.
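
A minimal sketch, assuming the picamera package is installed on the RasPi (pip install picamera):

import time

import picamera

# Take a still image with the RPi Camera module
with picamera.PiCamera() as camera:
    camera.resolution = (1024, 768)
    camera.start_preview()
    time.sleep(2)  # give the sensor time to adjust exposure
    camera.capture('image.jpg')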


Environment variables, API key and secret, BuildConfig and Android Studio

You want to create an Android app that uses the Twitter APIs, so you need an API key and an API secret that only you and your apps know. Because you need these values inside your app, it’s easy and quick to write them directly in your source code. But when you commit that code to GitHub (or any other public repo), you’re practically telling your secret keys to the entire world. Sounds uncommon? Unfortunately, not so much! Same for the Dropbox SDK.

One simple way to avoid this bad practice is to store your values outside the source code, in an environment variable or in the per-user Gradle properties file, so only your machine knows them; then read these values and inject them into your code at build time.
Let’s see how to do that using Android Studio, Gradle and BuildConfig.

First, we need to define these values. On Linux and Mac, create or edit the file ~/.gradle/gradle.properties (pay attention to the actual Gradle User Home directory position) and add your values:

#define your secret values
TwitterConsumerKeyProp=xxxxxx66666666634333333dddddd
TwitterConsumerSecretProp=3nkl3sds3skmslSDF394asdk39dmasd

Second, in your module’s build.gradle file, add these lines

apply plugin: 'com.android.application'
 
//Add these lines
def TWITTER_CONSUMER_KEY = '"' + TwitterConsumerKeyProp + '"' ?: '"Define Twitter Consumer key"'
def TWITTER_CONSUMER_SECRET = '"' + TwitterConsumerSecretProp + '"' ?: '"Define Twitter Consumer secret"'
 
android.buildTypes.each { type ->
    type.buildConfigField 'String', 'TWITTER_CONSUMER_KEY', TWITTER_CONSUMER_KEY
    type.buildConfigField 'String', 'TWITTER_CONSUMER_SECRET', TWITTER_CONSUMER_SECRET
}

Please note that “TwitterConsumerKeyProp” and “TwitterConsumerSecretProp” have to be spelled the same in both the Gradle properties file and the Gradle build file.
Finally, to use these values in your code, filled in for you by Gradle at build time, simply write:

ConfigurationBuilder cb = new ConfigurationBuilder()
    .setDebugEnabled(BuildConfig.DEBUG)
    .setApplicationOnlyAuthEnabled(true)
    .setOAuthConsumerKey(BuildConfig.TWITTER_CONSUMER_KEY)
    .setOAuthConsumerSecret(BuildConfig.TWITTER_CONSUMER_SECRET);

That’s all; then it’s up to you to create more elaborate configurations. For example, you can have different values for different android.buildTypes, or keep the Gradle properties file in a shared network folder used by the entire team, or…

My Synology DS214play setup

A NAS is a wonderful beast: you can use it to create your private cloud to store, backup and share your files, as a DLNA media server for smart TVs, as a download station for a variety of different contents and much, much more. True, you can configure your RasPi to do the same job, but appliance solutions are just… ready to use! In the hard task of picking one for my house, my choice eventually went to the Synology DS214play.

Why? Because Synology is a very well known manufacturer of corporate NAS solutions, and the software used in the high-end products is the same as in the consumer segment of their offer. Because they have a huge number of clients available for practically all OSes, desktop and mobile, to sync files, watch videos, listen to music, download files, act as a web, mail, file or git server, monitor IP cams and much, much more. Because their NASes are easily customizable and there is a vibrant community (even in Italian) of professional and domestic users behind these products. And because the DS214play has an Intel Atom CPU, can transcode 1080p video on the fly, has 2 bays for HDs and has an affordable price.


Cloud storage and file sharing

First of all, the NAS can be used as a Cloud Station to create a private cloud storage, while the dedicated app keeps your data in sync across different machines (no matter the OS). Once the router and a dynamic DNS service are configured, data is kept in sync also outside the LAN, so bye bye Dropbox, Google Drive and any other public cloud service.
Alternatively, it is possible to set up a classic rsync flow between a Linux/Mac machine and the NAS. I used the latter, and now I back up my entire desktop HD to the NAS, and the NAS can then back up its entire content (or part of it) to a cloud backup storage, like S3, Azure and others. The steps to set up rsync, ssh keys and the whole process are explained here. If ssh connects to the NAS but rsync doesn’t work, Network Backup has to be enabled via Menu -> Backup & Replication -> Network backup destination -> Enable Network Backup.


Extending the software

Additional software can be added using a well-known packages-repositories mechanism. The official repository has some apps, but the real magic starts when the SynoCommunity repo is added to the system. Git Server, Horde, ownCloud, CouchPotato, Python, Memcached, Mercurial, Headphones, SABnzbd and many more are all one click away. And they run directly on the NAS, with a lower power consumption compared to a dedicated PC.
It is also possible to create new packages with the help of spksrc, a cross-compilation framework intended to compile and package software for Synology NAS.


Manage movies, TV series and video in general

The first solution is to use DS Video, the dedicated client to manage personal collections of movies, TV series and videos. It’s available via a web interface, for mobile OSes and on some Samsung TVs. Chromecast is supported, so it’s possible to browse the movie catalog from web / smartphone / tablet and watch a title directly on the TV, without connecting a single cable. Obviously the video format has to be compatible with the Chromecast, but it’s always possible to re-encode a video into a supported format (download to a PC, convert, copy back to the NAS).

If DS Video is not enough, Plex Media Server can be installed too, with all the goodies Plex offers :). Once the Plex package has been downloaded, a manual install in the Synology Package Center is required. Before doing so, set Settings -> General -> Trust Level -> Allow any publisher, otherwise the installation will fail with a “This package does not contain a digital signature” error. New Plex server versions have to be checked for from time to time, because it isn’t automatically updated by the Synology Package Center. Finally, Plex does not support the hardware transcoding features of the DS214play or DS415play, so videos up to 720p are transcoded in software, with stuttering and pauses during playback and 100% CPU usage. But the Plex interface is way better than DS Video to me, and its catalog recognizes a bigger number of videos, so I still prefer Plex for browsing and organizing my movie and anime collection.


Music and Photos

I’m not a big music user (I have a Sonos system at home; all the rest is history now), but music works pretty much like movies, with a dedicated app plus some more available from the community. Photos too.


Download station

One of the areas where Synology excels is the entire pipeline of scouting, grabbing, downloading, organizing and making content available. For example, it’s possible to simply add a movie to an IMDb watchlist and have it automatically downloaded at home, ready to be streamed to the TV via Chromecast. Or, as soon as a followed TV series airs a new episode, it is automagically downloaded to the NAS, with subtitles, ready to be viewed in XBMC, without any intervention. Amazing! :) Let’s see a potential setup for this. Please note that the following steps lead to illegal actions in several countries, so I highly discourage everyone from doing that.

One way of automagically having the latest TV series episodes without a single interaction uses an RSS feed technique. On the search results page of Kickass and other services, it’s easy to find an RSS icon that points to an RSS feed with the results of that search. Every time the feed URL is loaded, new results, if any, can be found inside it. A specialized service, like showRSS, can create more fine-tuned feeds, with a particular file quality only, etc.
The default Download Station app can download BT/HTTP/FTP/NZB/RSS-feed/eMule files, so the RSS feed URL can be added to the app, checking the “Automatically download all items” option. Voilà, it works :) A drawback of this approach is that useless downloaded files must be deleted manually and subtitles are not downloaded.
If Kickass-like services are banned in a particular country, changing the NAS or router DNS to OpenDNS (208.67.220.220) or Google Public DNS (8.8.4.4) generally works around the limitation.


A more advanced setup for the download station

Sick Beard watches for new episodes of your favorite TV shows and, when they are posted, downloads, sorts and renames them, generates metadata and downloads subtitles, and more. It is available as a package in the SynoCommunity repo and, once installed (along with the Python package), can be configured to use Download Station for downloading snatched files. SABnzbd works too for nzb files (available in the SynoCommunity repo).
If Sick Beard is unable to communicate with Download Station directly, a watched folder can be configured in Download Station under Settings -> BT/HTTP/FTP/NZB -> Location, and Sick Beard needs to save torrent and nzb files in that folder, via Settings -> Search Settings -> NZB or Torrent search -> Method: Black hole, selecting the same folder used before. Now Download Station will look for files in that folder, grab them as soon as they’re added by Sick Beard and start the download.
In the settings, subtitle download can be enabled, while the post-processing options are useful for renaming and organizing files and downloading metadata (for XBMC, for example) once Download Station has finished with them.

A similar approach works for movies, this time using CouchPotato. Once it is installed from the SynoCommunity repo (Python package required too), set Settings -> Downloaders -> Black hole directory to communicate with Download Station. Then CouchPotato can be configured to follow an IMDb or Movies.IO watchlist, a Goodfilms user queue, Rottentomatoes, IMDb or Kinepolis charts and much more.
The potential issue with the standard version of CouchPotato is that it searches only some newsgroup services, with limited support for torrents. To solve that, it is possible to install the CouchPotato Custom package instead (plus the Git and Python packages) from the SynoCommunity repo and, during the first launch, specify a GitHub repo with a customized version of CouchPotato. For example, https://github.com/RuudBurger/CouchPotatoServer.git is also able to search the ThePirateBay and KickAssTorrents services. Change the DNS as described above if required.

If ThePirateBay and KickAssTorrents are a need also for TV series, the same “Custom” technique can be used with a fork of Sick Beard called SickRage. Installation is pretty straightforward: first the Sick Beard Custom package (with the Git and Python packages) from the SynoCommunity repo needs to be installed and, during its first launch, https://github.com/echel0n/SickRage.git must be used as the fork URL and master as the branch. That’s all, with more detailed instructions in the official forum. SickRage follows the same configuration as Sick Beard, but now there are a bunch of additional search providers to use ;)


The NAS is really a great piece of hardware, and its software flexibility allows using it in a variety of different ways (what about hosting your WordPress blog on it?). Future explorations will bring, for sure, additional setups (I have in mind something for IP cams, for example), so stay tuned!