
Pimaton v0.0.4, introducing web features!

Introduction

So as you know, I'm building a photomaton app for my sister-in-law's wedding. I released v0.0.3 not long ago, and I think it's time for v0.0.4 to be released.

The goal of this version is to answer a specific need: I want the photos to be synchronized online during the evening, so everybody at the party can go and look at the pictures and download them if needed.

I didn't want to reinvent the wheel here, so I looked for a simple web photo gallery application. My choice went to sigal, a static gallery generator, so I could write a simple script to automate the whole thing on the remote server.
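To give an idea, here is a minimal sketch of what such a sync script could look like (this is not the actual Pimaton code: the host, paths and sigal layout are placeholders, and it assumes rsync and sigal are available on the machines involved):

# Rough sketch only: push new pictures to the gallery host and rebuild sigal.
# Host, paths and the sigal project layout are placeholders.
import subprocess

LOCAL_PICTURES = "/home/pi/pimaton/pictures/"
REMOTE_HOST = "user@gallery.example.com"
REMOTE_PICTURES = "/var/www/gallery/pictures/"
REMOTE_SIGAL_DIR = "/var/www/gallery"


def sync_and_build():
    # Copy only new or changed files to the remote pictures folder.
    subprocess.check_call(
        ["rsync", "-az", LOCAL_PICTURES,
         "{}:{}".format(REMOTE_HOST, REMOTE_PICTURES)])
    # Regenerate the static gallery on the remote server.
    subprocess.check_call(
        ["ssh", REMOTE_HOST,
         "cd {} && sigal build".format(REMOTE_SIGAL_DIR)])


if __name__ == "__main__":
    sync_and_build()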

Also, to simplify the process, I want a QR code displayed on the waiting screen (GUI mode only) that links to the web gallery URL.

The last thing I needed is, after each "photoshoot", a link for the people who just took the pictures to a web gallery containing only the pictures of their group. That way they can scan a QR code and go download their specific pictures.

So that's why in v0.0.4 I'm adding web capabilities:

  • Synchronize pictures to a remote server (DONE)
  • Display a QR code on the waiting screen (DONE)
  • Put a QR code on the generated file that links to each "run" of Pimaton (DONE)
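As an illustration of the QR code part (not necessarily how Pimaton does it internally), the Python qrcode library makes this very short; the URL and output path below are placeholders:

# Illustration only: generate a QR code image pointing at a gallery URL.
# The URL and the output path are placeholders, not Pimaton's real configuration.
import qrcode

gallery_url = "https://example.com/gallery/run-42/"  # hypothetical per-run URL

img = qrcode.make(gallery_url)       # returns a PIL-compatible image
img.save("/tmp/pimaton_qrcode.png")  # ready to be shown on the waiting screen
                                     # or pasted onto the generated picture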

I've also decided to open a "pimaton" section on this blog (see the menu at the top of the page) to create a full tutorial on how to build your own photobooth, from the hardware setup to the construction of the box, with Pimaton installation and configuration in between.

It's still a very early draft and needs a lot of work, but the gallery setup is the first "good" page of this new area of the blog. I hope it will help any of you (or me, when I set everything up again for D-day) put your photobooth together :).

Pimaton is now installable via pip!

Yesterday was a small step for me into the world of Python packaging :)

As you've read before, I've decided to create a photobooth application for the Raspberry Pi for a wedding in August. Last time, I posted some pictures of Pimaton in action during an evening with a friend.

Since the code seems stable and usable (all MVP features are done except one: the GPIO button), I decided to follow my beloved "release early, release often" principle and create a pip package so Pimaton can be installed easily on a Raspberry Pi.

I won't spend time on how I created the package, as there is plenty of documentation about it online already, and the process is pretty straightforward :). The important thing is that it is now possible to boot Raspbian on a Pi and install Pimaton in two minutes. The package page is here

Click on the image below to see a quick gif of the GUI (note: the camera stream is not shown here as PiCamera handles it on its own above X, check the video to see it fully; also, the print option is disabled): ![Pimaton UI gif]({{ "/assets/pimaton_gui_thumbnail.gif" | absolute-url }})

You can see it working in a quick video here: QUICK DEMO

Installation

Now pimaton can be installed as simply as:

# Install dependencies:
sudo apt install python-pip libjpeg-dev
pip install -U pip
# To enable GUI:
sudo apt install python-tk python-pil.imagetk
pip install pimaton

Then, duplicate the default config file to start modifying it:

cp /usr/local/lib/python2.7/dist-packages/pimaton/assets/default_config.yaml /path/to/myconfig.yaml

Then edit the configuration file. Read the comments carefully before changing everything :). If you have X installed, I strongly suggest using the GUI for now by changing ui_mode to gui :).
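If you prefer doing that change from a script, something like this works (assuming PyYAML is installed; the exact key layout of default_config.yaml may differ, so inspect it first):

# Hypothetical helper to edit the config from Python, assuming PyYAML.
# The exact key names inside default_config.yaml may differ; inspect them first.
import yaml

config_path = "/path/to/myconfig.yaml"

with open(config_path) as f:
    config = yaml.safe_load(f)

print(config)  # check the real structure before changing anything

# Example (adjust the key path to the real structure of the file):
# config["ui_mode"] = "gui"
# with open(config_path, "w") as f:
#     yaml.safe_dump(config, f, default_flow_style=False)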

Then you can start running pimaton:

pimaton --config-file=/path/to/myconfig.yaml
pimaton --config-file=/path/to/myconfig.yaml --debug # verbose mode.

What can it do so far:

  • Take pictures and generate a final picture with thumbnails of the taken pictures (by default 6 pictures, displayed on 2 rows and 3 columns)
  • Print that picture if a printer is plugged in
  • Very configurable:
    • TUI (very basic for now) and GUI mode
    • Multiple start inputs (keyboard, touchscreen, and GPIO soon)
    • A template file (an image with decorations used as the base of the rendered image) can be used
      • An empty template with placeholders can be generated to help with creating the template image
    • Picture size (taken by the PiCamera), thumbnail size (printed pictures), and the number and layout of pictures in the final rendering
    • Time between steps / loops
    • All texts
    • Printing is optional, and the number of copies can be configured
    • All PiCamera settings can be overridden

More information can be found on the README.

Supporting FOSS #01

As said in my first post of 2018, I decided to "give back more" to open source this year by selecting 3 projects each month to donate to, to thank the projects that have helped me in my professional or personal life since my migration to open-source-only software almost 15 years ago.

As also said, I already have a monthly donation set up for La Quadrature du Net and I'm a paying member of the Drupal community too (annual fee), but I wanted to do a bit more.

I have in mind these categories and subcategories:

  • Tools
    • CLI
    • GUI (I don't use many of them, to be honest :)
  • Self hosting
    • Server related
    • Application related
  • OS and core software

Each month, I'll pick one for each main category and try to alternate subcategories for Tools and Self hosting.

I don't think these blog posts need to be long, so here we go:

  • Vim: no words needed here, except maybe that it has been my main text editor and IDE for the last decade, so I owe it so much :]
  • Letsencrypt: because they made secure self-hosting so simple, and it was desperately needed.
  • Raspbian: having 8 Raspberry Pis made this choice almost obvious :)

Find all my "Supporting FOSS" posts

Yesterday, Pimaton v0.0.3 (alpha) was put to use!

So you might remember that I'm working on a photobooth application for the Raspberry Pi, as detailed here.

Yesterday, I had friends over at my place, so I thought I might do a very small run of the application to actually test something closer to reality than my "developer tests".

I know that running a program over a long period can reveal issues, software ones (memory limits, bugs and crashes, ...) or hardware ones (printer issues, connectivity, ...), so I thought I'd do a first trial.

We didn't use it a lot (only 3 times over 3 hours, taking 18 pictures and printing 6 pages, 2 copies each).

The bad news is that we didn't try it as intensively as we should have (but the wedding is still very far away^^); the good news is that it worked almost flawlessly. At least we could start taking pictures whenever we wanted, and the printer behaved correctly all night :).

Obviously, I found some small issues in the UI (like step 7/3 ^^) and missing elements that I'll fix before tagging the MVP v0.0.3 (as well as some UI improvements, as it is very ugly at the moment).

It should arrive soon, even though I probably won't be able to work on it before the end of next week, as I'm travelling for work and don't want to take the Pi with me (or I might look into the possibility of emulating the Pi, the PiCamera and the GPIO in a virtual machine).

![Pimaton Dry Run pic]({{ "/assets/pimaton_dryrun_thumbnail.jpg" | absolute-url }})

You can see above an example of the rendering (blank template, so no text or icons printed with the thumbnails). Don't worry, the pictures are not blurry; I just don't like pictures of me on the internet :).

I'll update soon when v0.0.3 is stable and official :)

If you want to try the exact version of the code I ran yesterday, look at this specific commit

For more information, you can read the README, which should always be up to date with the latest code version.

Introducing my new project: Pimaton, a photobooth app for the Raspberry Pi

Hello everyone!

Introduction

As said in my previous post, I offered to set up a photo booth for my sister-in-law's wedding. I proposed this as I've seen numerous projects on Reddit and elsewhere about DIY methods based on the Raspberry Pi.

The few readers of this blog must know my love for the Pi (I think I now have 1 Pi 2, 1 Pi Zero W and 6 Pi 3s…), so I thought it would be great to not only do geeky stuff with them (code, hosting, …) but also something that non-geeky people will enjoy using, with a bit of hardware work (building the box) too.

I looked around the internet again and saw a lot of great projects like:

  • This one by Jake Barker
  • Boothy
  • drumminhands
  • Rpi photobooth
  • …

But while these are great projects and good inspiration, they didn't do exactly what I wanted, or I didn't like how it was done… So like a nerd, I started hacking away to create a small PoC… a PoC that will soon be published as 0.0.1 and open sourced (note: 0.0.1 won't be a fully usable project yet, but I like the principle of "release early, release often"^^).

This post introduces and describes the project; other posts will come soon about the evolution of the code and the construction of the box, as this is really a work in progress so far.

The idea of my photo booth

Requirements:

  • A simple box with a screen that invites people to take pictures
  • A big hardware button to start the process, one that people won't miss even when tired (hum hum) at 4am
  • Pressing the button will start taking several pictures
  • A UI that tells me when the pictures will be taken, with a countdown
  • Print the taken pictures combined into one image, like a photo booth (X times)
  • Use a "template" system (base image) to personalize the background/texts of the printed pictures (e.g. adding the names and the date at the bottom of the page, or anything else)
  • Save all the pictures for the newlyweds
  • A configurable system that can be reused easily
  • Installable via pip for simplicity
  • Maybe: add a flash / lights in case the place is dark

And if I have internet on the Pi, I also want to:

  • Sync pictures to a web server (outside of the Pi for performance reasons)
  • Generate a QR code giving a link so that people can download the pictures for themselves too (linked to the above)
  • Ask to receive pictures via email
  • Maybe: an "end of party" action that zips all the pictures and sends a downloadable link to everyone

And if using a touchscreen:

  • Add a button to see a slideshow of the taken pictures
  • Add an option to take pictures with or without flash (if a flash is installed)

If possible, it should be translatable too (I'm coding everything in English, but I'd like to have it set up in French for this wedding).

Where am I?

I've put together this very small roadmap:

  • v0.0.1: Core features:
    • Taking X pictures and displaying them on a single compiled image (with a template image)
    • Configurable via a YAML file
    • Taken and generated pictures share a unique key, to retrieve the full group of pics easily later on
  • v0.0.2: Printing capabilities + hardware button input
  • v0.0.3: UI implementations (implement both options, CLI and GUI)
  • v0.0.4: Web capabilities (if I decide to go that way)
  • v0.0.5: Optional stuff

The system will be fully usable starting with v0.0.3.

As of now, I have done all the v0.0.1 items, including the main architecture of the app, with a configuration system and the core features around pictures. The system takes X (configurable) pictures and creates thumbnails (size configurable). Then it loads the configured template if any (a base image that might contain decorations) and pastes the X images on it as needed (depending on the number of pictures and the number of rows/columns configured).
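To give an idea of the principle (a simplified sketch with made-up sizes and file names, not the actual Pimaton code), pasting thumbnails onto a template with Pillow looks roughly like this:

# Simplified sketch of the thumbnail grid idea, not Pimaton's actual code.
# Sizes, margins and file names are made up for the example.
from PIL import Image

rows, cols = 2, 3
thumb_size = (400, 300)
margin = 20

# Base image that may contain decorations; converted to RGB to save as JPEG.
template = Image.open("template.png").convert("RGB")
pictures = ["pic{}.jpg".format(i) for i in range(1, rows * cols + 1)]

for index, path in enumerate(pictures):
    thumb = Image.open(path)
    thumb.thumbnail(thumb_size)  # resize in place, keeping the aspect ratio
    x = margin + (index % cols) * (thumb_size[0] + margin)
    y = margin + (index // cols) * (thumb_size[1] + margin)
    template.paste(thumb, (x, y))

template.save("final_picture.jpg")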

So that's working pretty well :).

I started working on v0.0.2 this weekend, and the system is now able to print the generated image. I now want to add the hardware button as a configurable option (meaning you can either trigger it via the button or keep using the keyboard; in v0.0.3 it will also be possible to choose the GUI).

The hardware?

I think I'm going to write a more complete post on that when I talk about building the box, but as of now, I have:

  • a Raspberry Pi with Raspbian installed
  • a PiCamera
  • a 7" touchscreen and its case
  • a Canon Selphy CP1200 to print the pictures (connected via USB, not WiFi)
  • a big arcade push button (not connected yet)

WHERE IS THE FREAKING CODE ??

Well, it's not open yet, but it will be soon, probably this weekend. I will wait for 0.0.3 to advertise it more, but the source code will be accessible to the couple of people reading these pages :).

That's it for now, but stay tuned for news soon :)

2018 is here

Hello everyone!

A very quick post to say this blog isn't dead yet, and I wish you all a happy new year and all best for 2018!

Because I don't like making New Year's resolutions (let's face it, I almost never keep them anyway :p), I decided to challenge myself instead (yeah, I stole the idea from my friend juliuss ^^).

My challenge this year is to give back more to open source. Of course I already write a bit of open source code (especially for Kalliope lately), but I don't think that's enough. My job is based on open source and I've worked in open source for more than 7 years now. I think it's time to give back a bit more, now that I can, by making small donations every month (at least as long as I still can! --touching wood--) to my favorite or most used open source software. It might be a big or a very small project, based on daily usage or on an impulse :). I already have a monthly donation in place for La Quadrature du Net (it helps in the "fight" against the devil GAFAM), but I think I can do a bit more.

My challenge is to write here every month about the few projects I'm giving a bit of money to. The beautiful thing about using only open source for almost 15 years now (wow) is that I have a load of big and small projects I love and could donate to.

So that's challenge #1, starting before the end of this month!

Challenge #2: Create a photobooth ("photomaton" in French) application and hardware/box for my sister-in-law's wedding in August.

Apparently renting that kind of thing costs a lot and it can't be delivered everywhere. So I said "challenge accepted"! I've seen a bunch of similar projects on the internet, but none was as complete (or as cleanly implemented) as I wished… I also wanted to improve my Python knowledge by doing more than just some PRs for Kalliope… So my idea is to create a simple and configurable tool, installable via pip on a Raspberry Pi, able to take pictures via the PiCamera, generate a rendering like in a photobooth (multiple pics on one photo), and a lot more. I'll write dedicated blog posts on it (challenge #2.5).

Challenge #3: Improve my home automation and voice command setup

One of my key focuses is to improve and finish my home automation setup, based obviously on Kalliope and Domoticz, and I'll try to write more about it too :). As the few blog posts show, this has already been a key topic for me in 2017 and should still be in 2018.

Well, I think these are already big goals for this blog given the small number of posts last year, so I'll stop here :)

Again, best wishes for 2018!

Kalliope as your personal DJ

A quick blog post about how I manage my music via Kalliope :)

The basic idea is simple: I want to ask Kalliope to play music for me, and to manage Spotify playlists/searches and radio.

To do this, I need:

  • Kalliope installed
  • The Kalliope MPD module installed
  • A Mopidy server (to act as an MPD server and manage the Spotify and radio sources)

I wanted to have a simple flow:

Me: "I want to listen to music"
Kalliope: "Of course, what do you want to listen to?"
Me:
  Option 1:
    Me: "My favorite spotify playlist"
    Kalliope: "Ok, starting your favorite playlist"

  Option 2:
    Me: "spotify playlist <Name Of a playlist>"
    Kalliope: "Ok, launching <Name of a playlist>"

  Option 3:
    Me: "I want to listen to the radio"
    Kalliope: "Which radio sir?"
    Me: "<radio name>"
    Kalliope: "launching radio <radio name>"

  Option 4:
    Me: "music from <artist name>"
    Kalliope: "searching for music of <artist name>"

  Option 5:
    Me: "I'll try some fashion music"
    Kalliope: "Starting some fashion music"

Installation

Kalliope

First, install Kalliope. For this, follow the documentation. Once it's done and working, install the MPD neuron as indicated on the README.

We'll get to the brain configuration in the next step.

Mopidy server

Then you need to install Mopidy, either on the same device as Kalliope or on another one. For this, same as above, follow the documentation :)

If you have the same use cases as me and want to plug in Spotify and the radio, you will also need to install mopidy-spotify, tunein and spotify_tunigo, and of course the core MPD backend.

My .config/mopidy/mopidy.conf file, as an example:

[mpd]
enabled = true
hostname = 127.0.0.1
port = 8080
password = MyPassword
max_connections = 10
connection_timeout = 60
zeroconf = Mopidy MPD server on $hostname
command_blacklist = listall,listallinfo
default_playlist_scheme = m3u

[core]
cache_dir = $XDG_CACHE_DIR/mopidy
config_dir = $XDG_CONFIG_DIR/mopidy
data_dir = $XDG_DATA_DIR/mopidy
max_tracklist_length = 10000
restore_state = false

[logging]
color = true
console_format = %(levelname)-8s %(message)s
debug_format = %(levelname)-8s %(asctime)s [%(process)d:%(threadName)s] %(name)s\n  %(message)s
debug_file = mopidy.log
config_file =

[spotify]
username = MyLogin
password = MyPassword
enabled = true
bitrate = 320
timeout = 100
client_id = MyClientId
client_secret = MyClientSecret

[spotify_tunigo]
enabled: true

[tunein]
timeout = 5000

PS: you need to adapt the hostname if your Mopidy server is not installed on the same machine as Kalliope! See the Mopidy hostname configuration here.
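A quick way to check that the MPD interface exposed by Mopidy answers correctly is the python-mpd2 package (the host, port and password must of course match your [mpd] section):

# Quick connectivity check of the MPD interface exposed by Mopidy.
# Requires the python-mpd2 package; values must match your [mpd] section.
from mpd import MPDClient

client = MPDClient()
client.connect("127.0.0.1", 8080)  # hostname / port from mopidy.conf
client.password("MyPassword")      # only needed if a password is set
print(client.status())             # playback state, volume, ...
print(client.listplaylists())      # Spotify playlists should show up here
client.close()
client.disconnect()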

Configuring the brain

The first question

Let's create a new brain file called mpd.yml in your brains directory, and configure the first step of the flow:

  - name: "ask-and-play-music"
    signals:
      - order: "I want to listen to music"
    neurons:
      - say:
          message: "Of course, what do you want to listen to?"
      - neurotransmitter:
          from_answer_link:
            - synapse: "play-favorite-spotify-playlist"
              answers:
                - "My favorite spotify playlist"
            - synapse: "play-asked-spotify-playlist"
              answers:
                - "spotify playlist {{query}}"
            - synapse: "play-fashion-music"
              answers:
                - "I'll try some fashion music"
            - synapse: "play-asked-radio"
              answers:
                - "I want to listen to the radio"
            - synapse: "play-asked-music"
              answers:
                - "search for {{query}}"
          default: "didnt-understand"

So here we have the first step of our full flow. Kalliope asks what kind of music I want to listen to, and depending on my answer, different synapses will be triggered.

Now I need to define the actions for each of these choices.

Managing options and answers

Playing my favorite playlist

That's easy, just as explained in the neuron's documentation:

  - name: "play-favorite-spotify-playlist"
    signals:
      - order: "start my favorite spotify playlist"
    neurons:
      - kalliopempd:
          mpd_action: "playlist"
          mpd_url: "{{mpd_url}}"
          mpd_port: "{{mpd_port}}"
          mpd_password: "{{mpd_password}}"
          mpd_random: "1"
          mpd_volume: "50"
          query: "{{favorite_playlist_name}}"
      - say:
          message: "Ok, starting your favorite playlist"

I could have given it a "no_order", but I might want to fire this playlist directly, so I gave it a proper order.

Please note that I'm using the double curly brackets because I'm using the variables capability of Kalliope.

Playing a playlist

  - name: "play-asked-spotify-playlist"
    signals:
      - order: "start playlist {{query}}"
    neurons:
      - kalliopempd:
          mpd_action: "playlist"
          mpd_url: "{{mpd_url}}"
          mpd_port: "{{mpd_port}}"
          mpd_password: "{{mpd_password}}"
          mpd_random: "1"
          mpd_volume: "50"
          query: "{{query}}"
      - say:
          message: "Ok, starting the playlist {{query}}"

Asking and playing a radio

First, let's configure Kalliope so it asks which radio I want to listen to:

  - name: "play-asked-radio"
    signals:
      - order: "I want to listen to the radio"
    neurons:
      - say:
          message: "Which radio sir?"
      - neurotransmitter:
          from_answer_link:
            - synapse: "play-radio"
              answers:
                - "{{query}}"
          default: "didnt-understand"

Then, create the synapse that will actually start the radio:

  - name: "play-radio"
    signals:
      - order: "play-radio-no-order"
    neurons:
      - kalliopempd:
          mpd_action: "search"
          mpd_url: "{{mpd_url}}"
          mpd_port: "{{mpd_port}}"
          mpd_password: "{{mpd_password}}"
          mpd_random: "0"
          query: "{{query}}"
      - say:
          message: "Launching radio {{query}}"

Please note that I'm using the double curly brackets because I'm using the variables capability of Kalliope.

Asking and playing any music

Same as the favorite playlist:

  - name: "play-asked-music"
    signals:
      - order: "search music {{query}}"
    neurons:
      - kalliopempd:
          mpd_action: "search"
          mpd_url: "{{mpd_url}}"
          mpd_port: "{{mpd_port}}"
          mpd_password: "{{mpd_password}}"
          mpd_random: "0"
          query: "{{query}}"
      - say:
          message: "Launching some music of {{query}}"

Please note that I'm using the double curly brackets because I'm using the variables capability of Kalliope.

Didn't understand synapse

This is just to let me know if something went wrong and Kalliope didn't understand:

  - name: "didnt-understand"
    signals:
      - order: "didnt-understand"
    neurons:
      - say:
          message: "I'm terribly sorry sir, but something went wrong…"

Don't forget to include your new brain file in brain.yml, and to add your variables in a loaded variables file :)

The full brain file:

  - name: "ask-and-play-music"
    signals:
      - order: "I want to listen to music"
    neurons:
      - say:
          message: "Of course, what do you want to listen to?"
      - neurotransmitter:
          from_answer_link:
            - synapse: "play-favorite-spotify-playlist"
              answers:
                - "My favorite spotify playlist"
            - synapse: "play-asked-spotify-playlist"
              answers:
                - "spotify playlist {{query}}"
            - synapse: "play-fashion-music"
              answers:
                - "I'll try some fashion music"
            - synapse: "play-asked-radio"
              answers:
                - "I want to listen to the radio"
            - synapse: "play-asked-music"
              answers:
                - "search for {{query}}"
          default: "didnt-understand"

  - name: "play-favorite-spotify-playlist"
    signals:
      - order: "start my favorite spotify playlist"
    neurons:
      - kalliopempd:
          mpd_action: "playlist"
          mpd_url: "{{mpd_url}}"
          mpd_port: "{{mpd_port}}"
          mpd_password: "{{mpd_password}}"
          mpd_random: "1"
          mpd_volume: "50"
          query: "{{favorite_playlist_name}}"
      - say:
          message: "Ok, starting your favorite playlist"

  - name: "play-asked-spotify-playlist"
    signals:
      - order: "start playlist {{query}}"
    neurons:
      - kalliopempd:
          mpd_action: "playlist"
          mpd_url: "{{mpd_url}}"
          mpd_port: "{{mpd_port}}"
          mpd_password: "{{mpd_password}}"
          mpd_random: "1"
          mpd_volume: "50"
          query: "{{query}}"
      - say:
          message: "Ok, starting the playlist {{query}}"

  - name: "play-asked-radio"
    signals:
      - order: "I want to listen to the radio"
    neurons:
      - say:
          message: "Which radio sir?"
      - neurotransmitter:
          from_answer_link:
            - synapse: "play-radio"
              answers:
                - "{{query}}"
          default: "didnt-understand"

  - name: "play-radio"
    signals:
      - order: "play-radio-no-order"
    neurons:
      - kalliopempd:
          mpd_action: "search"
          mpd_url: "{{mpd_url}}"
          mpd_port: "{{mpd_port}}"
          mpd_password: "{{mpd_password}}"
          mpd_random: "0"
          query: "{{query}}"
      - say:
          message: "Launching radio {{query}}"

  - name: "play-asked-music"
    signals:
      - order: "search music {{query}}"
    neurons:
      - kalliopempd:
          mpd_action: "search"
          mpd_url: "{{mpd_url}}"
          mpd_port: "{{mpd_port}}"
          mpd_password: "{{mpd_password}}"
          mpd_random: "0"
          query: "{{query}}"
      - say:
          message: "Launching some music of {{query}}"

  - name: "didnt-understand"
    signals:
      - order: "didnt-understand"
    neurons:
      - say:
          message: "I'm terribly sorry sir, but something went wrong…"

Kalliope reminder with snooze

A quick blog post about how I set up Kalliope to remind me of stuff, with a snooze feature.

The default setup of the neurotimer lets you ask Kalliope to remind you of something after some time (minutes or hours).

I really like it because I have a poor short-term memory, so when I have something to do in 30 minutes, I use the Kalliope neurotimer module to help me remember... But sometimes, I want Kalliope to remind me again 5 minutes later because I didn't have time to do it when Kalliope first reminded me.

Let's take a real life use case:

Let's say I'm in a game (StarCraft 2 fan here ^^) and someone calls me. When in a game, I can't (don't want to) pick up the phone, but I want to remember to call the person back. For this I use the neurotimer, with a sentence like "Remind me in XX minutes to YYY".

This is the default setup explained in the neurotimer README file here.

But that is not enough for me, because I may still be in the game when Kalliope reminds me, so I want the ability to tell Kalliope to remind me again in X minutes without repeating the full order like before.

The workflow I want:

Me: "Kalliope… remind me in 10 minutes to call back dad"
Kalliope: "I'll remind you in 10 minutes to call back dad"
[… 10minutes later …]
Kalliope: "You asked me to remind you to call back dad"
Kalliope: "Do you want me to remind you again?"
  Option 1: (I want another reminder)
    Me: "Yes"
    Kalliope: "When?"
    Me: "In 5 minutes"
    [… 5 minutes later …]
    Kalliope: "You asked me to remind you to call back dad"

      *Note: I'm in a sort of loop here, so as long as I say "yes", it will remind me again, until I say no (or anything other than yes).*

  Option 2:
    Me: "No"
    Kalliope: "Ok, I won't remind you again."

So how does it work? I'm using the Neurotimer module, with the Neurotransmitter and the Kalliope memory to do so.

It does bring a limitation though: you can't have multiple reminders with "snooze", as the memory will only remember the last one.

My configuration to do so is the following:

Brain file:


{% raw %}
---
  - name: "reminder-synapse"
    signals:
      - order: "remind me to {{ remember }} in {{ time }} minutes"
    neurons:
      - neurotimer:
          minutes: "{{ time }}"
          synapse: "reminder-todo"
          forwarded_parameters:
            remember: "{{ remember }}"
          kalliope_memory:
            reminder_2: "{{ remember }}"
      - say:
          message:
            - "Ok sir, reminder setup"

  - name: "reminder-todo"
    signals:
      - order: "reminder_todo_no_order"
    neurons:
      - say:
          message:
            - "Sir, you asked me to remind you to {{ remember }}"
      - say:
          message: "Do you want me to remind you again ?"
      - neurotransmitter:
          from_answer_link:
            - synapse: "reminder2"
              answers:
                - "oui"
            - synapse: "no-response"
              answers:
                - "non"
          default: "no-response"

  - name: "reminder2"
    signals:
      - order: "reminder2-no-order"
    neurons:
      - say:
          message: "When do I have to remind you to {{ kalliope_memory['reminder_2'] }} ?"
      - neurotransmitter:
          from_answer_link:
            - synapse: "reminder-via-memory"
              answers:
                - "in {{time}} minutes"
            - synapse: "reminder-via-memory"
              answers:
                - "dans {{time}} minutes"
          default: "no-response"

  - name: "reminder-via-memory"
    signals:
      - order: "reminder-via-memory-no-order"
    neurons:
      - neurotimer:
          minutes: "{{ time }}"
          synapse: "reminder-todo"
          forwarded_parameters:
            remember: "{{ kalliope_memory['reminder_2'] }}"
      - say:
          message:
            - "Ok, I'll remind you in {{time}} minutes to {{ kalliope_memory['reminder_2'] }}"

{% endraw %}

Note: the "no-response" synapse is a default synapse I created that simply answers "ok sir" or something like that :).

And voilà :)

Headless Raspberry Pi installation

Note: this is more a quick note to myself on how to do this than a real blog post :)

The idea here is to install Raspbian on a Raspberry Pi without connecting the Pi to a screen and keyboard.

I'm writing it down as I'm installing my new Pi (for a MagicMirror² project, linked to Kalliope, but more on this soon(ish) ^^).

Steps:

  • Install Raspbian on a micro SD card (as explained on the internet [here] or [here], for example). I like to simply run the dd command like this: dd bs=4M if=/home/bacardi55/Téléchargements/2017-09-07-raspbian-stretch-lite.img of=/dev/mmcblk0 conv=fsync, and then sync
  • Mount the SD card boot partition and create an empty file named ssh (touch /path/to/card/boot/ssh).
  • You can now unmount the boot partition and mount the root partition of the card
  • Go to /path/to/your/card/etc/wpa_supplicant/ and edit the wpa_supplicant.conf file to configure your wifi.
  • Add this at the end:
network={
  ssid="NameOfYourNetwork"
  psk="Your WIFI password"
}
  • You can also change the country at the top of this file if you're not in GB :)
  • Optional: if you don't want to waste time finding the IP of the new Pi on your network, you can edit the etc/hosts and etc/hostname files to give it a name. Normally, you should then be able to ssh pi@theNameYouSet without knowing the IP :)
  • Save and quit, then unmount the SD card root partition
  • And voilà: connect to your Pi via ssh pi@name (the default hostname is raspberrypi if you didn't change it and if you don't have another Pi on your network) or via ssh pi@xxx.xxx.xxx.xxx :)

++

Kalliope updates, part 2

I've been using Kalliope again lately, so I've had to work on my neurons and external apps to make sure they still work with the latest Kalliope version.

This quick blog post is about the state of my Kalliope-related code; the next one will be about my Kalliope + Kodi + Domoticz setup so far :)

Neurons:

Neurons that are working (their READMEs / sample files have been updated):

  • Domoticz: manage switches or get values (light, temperature, ...) via Domoticz
  • Google Calendar: get your meetings from Google Calendar
  • Google Maps: Google Maps integration (ask about addresses, itineraries (car or tube), ...)
  • MPD: manage MPD (and Mopidy/Spotify) via Kalliope
  • Repeat: ask Kalliope to repeat stuff (I only use it via the API)
  • TodoTxt: manage todo lists via Kalliope
  • Uber: ask how long it will take to get a driver, or the price of a ride
  • World wild time: ask the time in any city in the world

Not working neurons:

  • List all orders: Will fix it soon(ish) ^^
  • PI Camera (no plan to work on it soon)
  • Web Scrapper: get info from web pages by scraping them (for pages without an API or RSS). My most urgent neuron to fix! :)

External tools:

Kalliope CLI

I've fixed the CLI app and improved it quite a lot. Now you can list all orders and send any text order, with autocompletion :)

This quick YouTube video shows some of these features (except autocompletion): KalliopeCLI demo video

Kalliope Remote

It is kinda half working, half broken at the moment. The app can still list all orders and put some of them in the ignore list. Each order that doesn't require a parameter can be fired with a click.

The full-text order input still works too, so you can send an order via text.

The audio input doesn't work anymore, but I'll fix this soon.