The Music of the Algorithms

Thoughts on Computing, the Universe, and Everything

A Year in Research

Published on March 22, 2017. Research, Thoughts, IJCAI, ICAPS, Planning

The year that just ended has been an important turning point in my life, marking the start of my journey through the world of scientific research. A year of Ph.D. might seem like a short span of time, but there is more to say and think about than I would have imagined, so here is a little summary, and some afterthoughts, a year later.

July 24th, 2015 was a hot and sunny summer day, perfect for going to the beach or taking a trip to the mountains. But that day I had other plans: I was graduating. After returning from my six-month stay in Spain the year before, finishing the last round of exams had been hard and stressful, leaving me less time and energy than I wished to focus on the thesis, but now I was ready to close the last chapter of my life at university. Except that, as I already knew, it was not going to be the end of anything, but the start of something new.

I had had a Ph.D. in mind for some years. The idea was born mainly out of my interest in the mathematical aspects of Computer Science, which developed relatively late in my life. I wanted to learn more, and this was the only way to achieve my goal. My love for Science in general also played a role: I wanted to actually do some Science, even if in a very small way, not just learn about it. Then I was accepted into the Ph.D. program at my own university, and everything started to look real: I was going to start this adventure! At the beginning I knew nothing about what research looked like, and I knew nothing about what my research should look like. Now, a year later, I know nothing + ε, so I can try to share this ε with you.

So what are you doing exactly?

I’m sure explaining one’s own research topic is a tough task for anybody, but it is even more so for a Ph.D. student in computer science. Everybody knows what medicine is for, or what engineers do, even if they might know nothing about those disciplines. But theoretical computer scientists, well, they just don’t exist in people’s minds. I’m going to explain in a few words what I did and what I’m going to do in the next two years, but I don’t know who you are. You might be my supervisor, or my grandmother, so I’ll try to be clear enough without too many words. If you are reading this, though, I assume you’re interested, so if you don’t get something, do me one favor: ask me to explain.

Planning my research

Being well into my second year, I can already tell you what my Ph.D. thesis will approximately be about: a theoretical study of the logical expressiveness and computational complexity of Automated Planning languages and problems. Planning is a classical topic in the part of AI research dealing with automated reasoning (rather than learning, the other part of AI, which has recently been getting all the glory). The aim is to design systems able to reach a given goal, as independently as possible, given a description, as general as possible, of what they can possibly do. If this seems too general to you, it’s because it is. The problem is precisely that of performing tasks as diverse as possible with a reasoning process as general as possible. This generality and flexibility mean that the problem, in its general form, can be very hard: even the easiest variants are PSPACE-complete. Despite the theoretical complexity, planning systems have been very successful in practice, and are able to handle huge problems. The technology is used everywhere, from managing airports to handling logistics, from industrial robotics to managing space operations.

A planner typically expects as input a description of its environment and of how it can interact with it, i.e., what admissible actions can be performed to operate inside this environment. The second part of the input is the goal, i.e., what we want it to do. The output can be, depending on the problem, a list of actions to perform to reach the goal, or a more general recipe that also explains how to react to the environment, which can often be a priori unpredictable. The input the planner expects has, of course, to be expressed in some syntax, a domain description language, and as it turns out, the choice of this language has profound effects on the problem itself. The language has to be flexible enough to let you express all the details needed to model your problem, but constrained enough for the problem to be solvable in a reasonable amount of time. The trade-off between expressiveness and computational complexity is a classical topic in computational logic: every time you increase the expressiveness of your language, you pay for it in terms of how difficult the language is to deal with. This can go to extremes: add too much, and the problem may become intractable or even undecidable; constrain it too much, and you won’t be able to model and solve real-world problems.
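
To make the input/output contract concrete, here is a minimal sketch, in Haskell, of what an action-based planner does: a toy STRIPS-like representation where states are sets of facts, actions have preconditions and add/delete effects, and a naive breadth-first search returns a shortest plan. Everything here (the names, the representation, the toy domain) is my own illustration, not the input language of any real planner, which would typically speak something like PDDL and use far smarter search algorithms:

import           Data.Set (Set)
import qualified Data.Set as Set

type Fact  = String
type State = Set Fact

-- An action: a name, preconditions, facts it adds, facts it removes.
data Action = Action
  { actName :: String
  , actPre  :: Set Fact
  , actAdd  :: Set Fact
  , actDel  :: Set Fact
  }

applicable :: State -> Action -> Bool
applicable s a = actPre a `Set.isSubsetOf` s

apply :: State -> Action -> State
apply s a = (s `Set.difference` actDel a) `Set.union` actAdd a

-- Naive breadth-first search over the state space: returns a shortest
-- plan (a list of action names) reaching the goal, if one exists.
plan :: [Action] -> State -> Set Fact -> Maybe [String]
plan actions initial goal = go [(initial, [])] Set.empty
  where
    go [] _ = Nothing
    go ((s, p) : queue) visited
      | goal `Set.isSubsetOf` s = Just (reverse p)
      | s `Set.member` visited  = go queue visited
      | otherwise               = go (queue ++ next) (Set.insert s visited)
      where
        next = [ (apply s a, actName a : p)
               | a <- actions, applicable s a ]

-- A toy domain: a single switch that can be toggled on.
main :: IO ()
main = print $ plan [toggle] (Set.fromList ["off"]) (Set.fromList ["on"])
  where
    toggle = Action "toggle" (Set.fromList ["off"])
                             (Set.fromList ["on"])
                             (Set.fromList ["off"])

Of course, the hard part that this sketch completely glosses over is the search: the state space is exponential in the number of facts, which is exactly where the PSPACE-hardness mentioned above comes from.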

So here comes my Ph.D. thesis: trying to settle some questions about the expressiveness and computational complexity of a certain class of planning problems known as timeline-based planning problems. Basically, common planning languages reason about what a single executor can do. Toggle that switch, lift that load, move that piece, switch on the engine, etc. They are quite imperative in style, even if expressed very declaratively. In contrast, timeline-based planners reason about the whole system as a set of components, modelling them separately and then composing their behavior with a set of temporal constraints that describe how the components interact with each other. This paradigm is more compositional, and makes it easier to model domains where a lot of different components have to interact. Moreover, timelines can be flexible, meaning that they can represent uncertainty in the behavior of your components: imagine, e.g., that you know that you can go from \(A\) to \(B\), but you cannot know precisely how much time it will take. You can also model uncontrollable components: parts of the system that are not under your control, perhaps because they are part of the environment itself. All these features are very convenient in highly unpredictable environments, and it is not a coincidence that timeline-based planners have been successfully used in practice for easy tasks like space missions.
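
To give an idea of the ingredients, here is a hypothetical Haskell sketch of the core notions as I would model them: components are state variables with allowed value transitions and flexible durations, timelines are sequences of tokens with flexible endpoints, and synchronization rules constrain tokens across components. The names and structure are a simplification of mine, for illustration, not the exact formalism used by any actual timeline-based planner:

-- A state variable (component): its possible values, which values may
-- follow which, and how long each value may be held (flexible duration).
data StateVariable v = StateVariable
  { values     :: [v]
  , transition :: v -> [v]         -- allowed successor values
  , duration   :: v -> (Int, Int)  -- (minimum, maximum) duration
  }

-- A token: a value held by a component over a flexible time interval.
data Token v = Token
  { tokenValue :: v
  , earliestStart, latestStart :: Int
  , earliestEnd,   latestEnd   :: Int
  }

-- A timeline is the planned behavior of a single component.
type Timeline v = [Token v]

-- A small sample of temporal relations between pairs of tokens.
data Relation
  = Before Int Int  -- the first token ends between n and m units earlier
  | During          -- the first token happens within the second
  | Equals          -- the two tokens coincide

-- A synchronization rule: whenever some component holds a given value,
-- suitably related tokens must exist on (possibly other) components.
data SyncRule c v = SyncRule
  { trigger  :: (c, v)
  , requires :: [((c, v), Relation)]
  }

A solution is then a set of timelines, one per component, that respects every component’s transitions and durations and satisfies all the synchronization rules.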

So what can I do about it? Despite a lot of success in practice, the theoretical side of the equation is less developed than for the more common, action-based languages. Little is known, for timeline-based planning, about the theoretical issues I described above. So I spent my first year trying to understand the framework and then identifying some key questions that still need to be answered: what is the computational complexity of the problem in its full generality? How expressive is it really, compared with other approaches? In other words, is it only syntactic sugar, or is there really more under the hood (tl;dr: yes, there is)? What is the logical counterpart of the problem? That is, we know classical planning is equivalent to satisfiability checking of Linear Temporal Logic formulas: what is the logic that captures timelines instead? The potentially interesting questions are many, and I doubt I will be able to answer them all in the next two years. The first results are two papers, one just accepted to ICAPS, which show that a very constrained version of the problem is already very hard, namely EXPSPACE-complete, and that a certain class of action-based planning languages can be expressed as timeline-based problems in a compact way (while the converse is not true).
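
To give a flavor of the logical side, here is a simplified sketch of how an action-based planning problem can be phrased as LTL satisfiability (an illustration of the general idea, not the exact encoding from the paper I mentioned). With one proposition per fact and per action, one asserts a formula describing the initial state, together with:

\[
\begin{aligned}
& \mathsf{G}(a \rightarrow \mathit{pre}_a) && \text{an action can occur only if its precondition holds,} \\
& \mathsf{G}(a \rightarrow \mathsf{X}\,\mathit{eff}_a) && \text{and its effects hold at the next state,} \\
& \mathsf{G}\bigl((p \land \mathsf{X}\lnot p) \rightarrow \textstyle\bigvee_{a\ \text{deletes}\ p} a\bigr) && \text{facts persist unless some action deletes them,} \\
& \mathsf{F}\,\mathit{goal} && \text{and the goal is eventually reached.}
\end{aligned}
\]

A model of the conjunction of these formulas is a sequence of states and actions, i.e., a plan, so plan existence reduces to an LTL satisfiability check.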

How did I come to work on this stuff? Those who know me know that I have always loved the theoretical aspects of Computer Science, but that I didn’t have these topics in mind until very recently, so what happened? As unlikely as it may seem, everything started in Spain. During my Erasmus period in Valencia, in 2014, I took some courses about Artificial Intelligence, specifically Neural Networks and Machine Learning. It was a really cool topic, and when I returned to Udine I tried to link it to everything I was doing. So, as often happens, by chance I asked the right question to the right people. In that period, prof. Mark Reynolds from UWA was visiting Udine to give a course about Temporal Logics, as part of the Automatic Systems Verification course held by prof. Angelo Montanari. I have always liked logics and automata a lot, and I was enjoying the course very much, but in my head that year was the year of AI, so I asked what, in hindsight, must have sounded like a very naive question: are there any links between these topics and AI? Of course there are! I was pointed to a paper by, among others, Marta Cialdea Mayer, from the University of Roma Tre, and Andrea Orlandini, from the Italian National Research Council, which showed the encoding of classical planning into LTL that I mentioned before. This was the starting point for my master thesis, of which Marta and Andrea were co-supervisors. This experience turned into a fruitful collaboration that continues today with our joint work on timeline-based planning – we worked together on both of the papers above – which is one of their core areas of expertise.

Mark Reynolds’ 2014 visit was also the opportunity to learn about one of his recent works, a new tableau-based procedure for checking the satisfiability of Linear Temporal Logic formulas. It was a new way of solving a problem that had already been studied for a long time, and it seemed very promising. A colleague of mine, Matteo Bertello, worked on implementing it in C++ as the exam for that course, creating a tool he called Leviathan, and then spent a little less than a year doing an amazing job optimizing it for performance. By the time he finished, I was just starting my Ph.D., and we managed to publish the results in a joint paper, where I handled the experimental evaluation of the tool’s performance and the writing of the actual manuscript. That was basically my first published paper (I had a previous publication, but without being involved in the writing), and considering I had been there for only three months, it was a very good start. Publishing it also had a very nice side effect, which I will tell you about later. Even if tableau-based systems for LTL are not my main research topic, they are a very interesting one. Recently, a year later, Mark Reynolds visited us again, and we managed to submit a second paper, which has just been accepted to LPAR-21, and to collect material for at least two more. What started as a side project has become a relevant part of my research activity.

Side projects without side effects

Speaking of side projects, you know I cannot reason about only one thing at a time, so I’ve been busy doing a lot of things in the meantime, a fact that has also caused some trouble. Last year I went to Meeting C++ to give a talk about Functional Programming in C++. I know, I promised a follow-up post a year and a half ago. Please be patient. Still talking about C++, a mention goes to our C++ User Group Udine, which is now three years old. We had a meeting in December and it was great, with nearly forty people and great guests giving talks. Somehow I managed to find the time to handle the logistics of that meeting, and I’m quite proud of our group.

Still on the subject of functional programming, at the beginning of the year I was also involved in a side project that allowed me to learn a lot about the internals of GHC, the Glasgow Haskell Compiler. The project involved the STM library, a popular concurrency library that implements Software Transactional Memory in Haskell, and resulted in the bachelor thesis of a talented friend of mine, Valentino Picotti, also an active member of the C++ User Group. What is STM? Using this library, you can run concurrent code inside transactions with guarantees that resemble those provided by database systems: atomicity, consistency, and isolation (three of the four classic ACID guarantees; durability makes little sense for in-memory data). In practice, this means that instead of having to lock some synchronization primitive before accessing a shared variable, you can just enclose your mutating code inside an atomically block and let the runtime do the dirty job:

atomically $ do
  -- atomically increment the shared transactional variable ‘var’
  modifyTVar var (+1)

Here, atomicity guarantees that the code inside the do block is executed either entirely or not at all, while isolation ensures that no other concurrent transaction can see the effects of this code until it has committed. This is really convenient, as you can write concurrent code in a very natural way without worrying about locks or race conditions. A colleague of mine, Marco Peressotti, worked on a generalization of this paradigm that allows the use of open transactions. This means that, by dropping the isolation guarantee, the execution of a transaction may be influenced by mutations to variables made by other committed transactions. This basically allows two concurrent transactions to communicate, thus avoiding the use of IO actions to coordinate the operations of different transactions.
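
To contrast with what open transactions relax, here is a small, self-contained sketch of mine showing standard (closed) STM at work: two threads transfer money back and forth between two accounts, and atomicity plus isolation guarantee that no thread can ever observe a state where the money has left one account without reaching the other. The scenario and names are hypothetical, but the code only uses the public Control.Concurrent.STM API:

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM
import Control.Monad (replicateM_)

-- Move an amount between two balances: either both updates happen
-- or neither does. If the funds are insufficient, 'retry' blocks the
-- transaction until some other transaction commits a change to 'from'.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  if balance < amount
    then retry
    else do writeTVar from (balance - amount)
            modifyTVar to (+ amount)

main :: IO ()
main = do
  alice <- newTVarIO 100
  bob   <- newTVarIO 0
  _ <- forkIO $ replicateM_ 1000 $ atomically (transfer alice bob 10)
  _ <- forkIO $ replicateM_ 1000 $ atomically (transfer bob alice 10)
  threadDelay 100000  -- crude way to wait for the threads, for brevity
  total <- atomically $ (+) <$> readTVar alice <*> readTVar bob
  print total  -- always 100: intermediate states are never visible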

Marco did all the theoretical work, formally defining the semantics of the new system. The project, then, consisted in implementing this idea. The first step in this direction was to understand in detail how the STM implementation worked, in order to borrow as many ideas as possible without reinventing the wheel. However, hacking on GHC directly was not feasible without a lot of experience, so we decided to implement a separate library, which would inevitably be slower, but easier to design and suitable as a realistic proof of concept. I helped in the design phase, and Valentino did a great job with the implementation, which had to be written in C because of the low-level interaction with the Haskell runtime. The library still has to be polished, and we plan to finalize the work sooner or later, maybe for some future Haskell Symposium. In the meantime, the incredible amount of things learned in the process was well worth the effort.

A whole new world

One of the best benefits of being part of the academic world, especially in the Computer Science community, is the great number of opportunities to travel around the world. In fact, last year, from June to September, I was at home for maybe no more than three weeks in total.

Science and The City

The paper on the implementation of Reynolds’ tableau was published at IJCAI, the International Joint Conference on Artificial Intelligence, which in 2016 was held in New York City. This is one of the biggest conferences in the area, and participating in it was an exciting experience, which was also very useful as an opportunity to meet a lot of other people and discuss our research with them. The poster session, in particular, was extremely productive. But as professional as I may have been, I cannot hide that the biggest feature of the conference was its location. This was my first trip outside Europe, and it can be regarded as one of the most intense and interesting trips of my life. New York City is simply astonishing.

[Photo: Manhattan at sunset]

The famous jungle of skyscrapers is familiar to anyone who has seen at least a couple of Hollywood movies, but the feeling of walking through Times Square or in Central Park is that of being literally inside those movies. The urban landscape of NYC simply feels unique, similar to a landscape shaped by some natural phenomenon, as if the clouds had been designed together with the skyscrapers to reflect on the glass walls of other skyscrapers. But it is at sunset that New York gives its best. The city has been deliberately built on a straight grid: you have Avenues going North/South and Streets going East/West, perpendicularly, all through the island of Manhattan. This means that, standing in the middle of a Street, you can see all the way through to the other side of the island, and at sunset this combines with the sun setting near the horizon in a uniquely beautiful scene. When I took the picture you see here (click to see the entire album on Flickr), I was one of at least ten other people photographing the same view, one of them even with a full tripod set up in the middle of the zebra crossing. The cultural landscape seemed at least as interesting as the urban one, from the typical Broadway musicals to the many important art museums (which, fortunately, I managed to visit). Unfortunately, ten days busy at conferences are not enough to experience the city in all its facets, but they are sufficient to get a taste of it.

Of course, such a complex city also has its price: the rhythm is really chaotic, the traffic is terrible (air quality? what’s that?), and the cost of living is absurdly high. I paid $139 per night for an ugly two-star bed & breakfast (only the bed, with no breakfast, actually). I can’t really imagine living there for an extended period (unless I were rich enough to get a penthouse facing Central Park), but I can definitely understand what makes New Yorkers so proud of their city. They feel part of something great.

School’s out for summer

I hope Alice Cooper won’t take it personally, but sometimes one has to go to school during the summer, too. Indeed, no survey of my summer journeys would be complete without mentioning ESSLLI 2016, the 28th European Summer School in Logic, Language and Information: two intense weeks of courses on transversal disciplines, ranging from computational logic, to natural language semantics, to philosophy.

[Photo: landscapes around Bolzano]

Located among the wonderful Dolomite landscapes of Bolzano, this summer school was a really formative experience. The courses were interesting and held by internationally recognized experts in their fields, but among all of them, one was enlightening: Social Choice Theory for Logicians. Eric Pacuit is a very talented lecturer and, most importantly, his lectures constantly transmitted his passion for what he does. I had heard of Arrow’s theorem¹ before, but I was barely aware of the existence of Social Choice Theory as a field before attending this course. Afterwards, I wished I could switch my research topic. Well, maybe I’m exaggerating a bit, but he definitely got everybody’s attention.

Besides the interesting courses, the most important attraction of ESSLLI is probably the cross-fertilization between different disciplines. Among the attending students were computer scientists, logicians, linguists, engineers, and even philosophers, coming from all around the world, from China to Uruguay. In this way, I learnt things that I would not have come to know otherwise. Did you know that many linguists commonly use lambda calculus to model natural language semantics? Neither did I.

TIME flies

In October I went to Copenhagen to present our first paper on the expressiveness of timeline-based planning problems at TIME 2016, the 23rd International Symposium on Temporal Representation and Reasoning. TIME is a small but well-known conference that aims to let experts from very different disciplines meet under a common denominator: temporal reasoning. The conference is interested in temporal reasoning motivated by or applied to a broad variety of fields, ranging from AI to logic, to formal verification, to databases. It fits the kind of work I’m doing perfectly, and it was the perfect venue to start presenting my research.

But this conference in particular had an important emotional meaning for me. In 2014, when I was still a master student, I attended TIME 2014, held in Verona, invited by prof. Montanari, who sponsored the trip. That was a very troubling period for me: after returning from Spain, I had to catch up with the exams, work on the thesis, and, most importantly, choose what to do next. TIME 2014 was the first academic conference I ever attended, and it was a very enlightening experience. Two years later, I was at TIME 2016, presenting the first little steps of my own research, feeling like I was somehow closing the circle. Time flies, and this feeling reminded me why mine had been well spent. It was a very little step, all things considered, but it was worth all the effort. One thing is sure: I would not have been able to do anything without the support of many special people: my family, my friends, but especially my girlfriend: to stand any battle, even the smallest one, we all need someone to fight for.

Next round

When I started writing this post, I thought I had just a few things to say, only to realize how many things can be done in a single year. For instance, I didn’t even mention two other conferences and another summer school that I attended, but this post has already grown too long. Maybe, instead of waiting an entire year to find the time to write something up, I will manage to come back to this blog more often, as originally intended. Maybe ;)


  1. Sadly, the von Neumann Theory Prize and National Medal of Science winner Kenneth Arrow passed away less than two months ago.