# A Year in Research

Published on March 22, 2017.

The year that just finished has been an important turning point in my life, marking the start of my journey through the world of scientific research. A year of Ph.D. might seem a short span of time, but there is more to say and to think about than I would have imagined, so here is a little summary, and some afterthoughts, a year later.

July 24th, 2015 was a hot and sunny summer day, perfect for going to the beach or taking a trip to the mountains. But that day I had other plans: I was graduating. After returning from my six-month stay in Spain the year before, finishing the last round of exams had been hard and stressful, leaving less time and energy than I would have liked to focus on the thesis, but now I was ready to end the last chapter of my life at university. Except that, as I already knew, it was not going to be the end of anything, but the start of something new.

I had had a Ph.D. in mind for some years. The idea was born mainly out of my interest in the mathematical aspects of Computer Science, which had grown relatively late in my life. I wanted to learn more, and this was the only way to achieve my goal. My love for Science in general also played a role: I wanted to actually do some Science, even if only in a very small part, not just learn about it. Then I was accepted to the Ph.D. program at my own university, and everything started to look real: I was going to start this adventure! At the beginning I knew nothing about what research looked like, and I knew nothing about what my research should look like. Now, a year later, I know nothing + ε, so I can try to share this ε with you.

## So what are you doing exactly?

I'm sure explaining one's own research topic is a tough task for anybody, but it is even more so for a Ph.D. student in computer science. Everybody knows what medicine is for, or what engineers do, even if they might know nothing about those disciplines. But theoretical computer scientists, well, they just don't exist in people's minds. I'm going to explain in a few words what I did and what I'm going to do in the next two years, but I don't know who you are. You might be my supervisor, or my grandmother, so I'll try to be clear enough without too many words. If you are reading, though, I assume you're interested, so if you don't get something, do me one favor: ask me to explain.

### Planning my research

Now that I am into my second year, I can already tell you what my Ph.D. thesis will approximately be about: a theoretical study of the logical expressiveness and computational complexity of Automated Planning languages and problems. Planning is a classical topic in the part of AI research dealing with automated reasoning (rather than learning, the other part of AI that has recently been getting all the glory). The aim is to design systems able to reach a given goal, as independently as possible, given a description, as general as possible, of what they can possibly do. If it seems too general to you, it's because it is. The problem is exactly that of being able to perform tasks as diverse as possible with a reasoning process as general as possible. This generality and flexibility mean that the problem, in its general terms, can be very hard: the easiest variants are PSPACE-complete. Despite the theoretical complexity, planning systems have been very successful in practice, being able to handle huge problems. The technology is used everywhere, from managing airports to handling logistics, from industrial robotics to the management of space operations.

A planner typically expects as input a description of its environment, and of how it can interact with it, i.e., which admissible actions can be performed to operate inside this environment. The second part of the input is the goal, i.e., what we want it to do. The output can be, depending on the problem, a list of actions to perform to reach the goal, or a more general recipe that also explains how to react to the environment, which can often be unpredictable a priori. The input the planner expects has, of course, to be expressed in some syntax, a domain description language, and as it turns out, the choice of this language has profound effects on the problem itself. The language has to be flexible enough to let you express all the details needed to model your problem, but constrained enough for the problem to be solvable in a reasonable amount of time. The trade-off between expressiveness and computational complexity is a classical topic in computational logic: every time you increase the expressiveness of your language, you pay for it in terms of how difficult the language is to deal with. This can go to extremes: if you add too much, the problem may become intractable or even undecidable, while if you constrain it too much, you won't be able to model and solve real-world problems.
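
To make this concrete, here is a toy sketch, entirely my own illustration and not any real planner's API: states are sets of facts, actions have preconditions and add/delete effects, and a plan is found by blind breadth-first search over the reachable states.

```haskell
import qualified Data.Set as Set

-- A hypothetical, minimal model of a planner's input.
type Fact  = String
type State = Set.Set Fact

data Action = Action
  { actName :: String        -- a label for the plan
  , actPre  :: Set.Set Fact  -- facts required to apply the action
  , actAdd  :: Set.Set Fact  -- facts made true by the action
  , actDel  :: Set.Set Fact  -- facts made false by the action
  }

-- An action is applicable when its preconditions hold in the state.
applicable :: State -> Action -> Bool
applicable s a = actPre a `Set.isSubsetOf` s

-- Applying an action removes its delete effects and adds its add effects.
apply :: State -> Action -> State
apply s a = (s `Set.difference` actDel a) `Set.union` actAdd a

-- Breadth-first search: return a sequence of action names reaching
-- a state that satisfies the goal, if one exists.
plan :: [Action] -> State -> Set.Set Fact -> Maybe [String]
plan actions start goal = go [(start, [])] (Set.singleton start)
  where
    go [] _ = Nothing
    go ((s, p) : rest) seen
      | goal `Set.isSubsetOf` s = Just (reverse p)
      | otherwise =
          let next = [ (s', actName a : p)
                     | a <- actions
                     , applicable s a
                     , let s' = apply s a
                     , not (s' `Set.member` seen) ]
          in go (rest ++ next) (foldr (Set.insert . fst) seen next)
```

Real planners are of course vastly more sophisticated than this exhaustive search, but the shape of the input and output is the same, which is why the description language matters so much.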

So here comes my Ph.D. thesis: trying to settle some questions about the expressiveness and computational complexity of a certain class of planning problems known as timeline-based planning problems. Basically, common planning languages reason about what a single executor can do. Toggle that switch, lift that load, move that piece, switch on the engine, etc. They are quite imperative in style, even if expressed very declaratively. In contrast, timeline-based planners reason about the whole system as a set of components, modelling them separately, and then compose their behavior with a set of temporal constraints that describe how the components interact with each other. This paradigm is more compositional, and allows us to more easily model domains where many different components have to interact. Moreover, timelines can be flexible, meaning that they can represent uncertainty in the behavior of your components: e.g., imagine that you know you can go from $$A$$ to $$B$$, but you cannot know precisely how much time it will require. You can also model uncontrollable components: parts of the system that are not under your control, perhaps because they are part of the environment itself. All these features are very convenient in highly unpredictable environments, and it is not a coincidence that timeline-based planners have been successfully used in practice for easy tasks like space missions.

So what can I do about it? Despite a lot of success in practice, the theoretical side of the equation is less developed than for the more common, action-based languages. Little is known, for timeline-based planning, about the theoretical issues I mentioned above. So I spent my first year trying to understand the framework and then identify some key questions that still need to be answered: what is the computational complexity of the problem in its full generality? How expressive is it really, compared with other approaches? In other words, is this only syntactic sugar or is there really more under the hood (tl;dr: yes, there is)? What is the logical counterpart of the problem? That is, we know classical planning is equivalent to satisfiability checking of Linear Temporal Logic formulas: what is the logic that captures timelines instead? The potentially interesting questions are many, and I doubt I will be able to finish everything in the next two years. The first results are two papers, one just accepted to ICAPS, which show that a very constrained version of the problem is already very hard, i.e., EXPSPACE-complete, and that a certain class of action-based planning languages can be expressed through timeline-based problems in a compact way (while the converse is not true).

How did I come to work on this stuff? Those who know me know that I have always loved the theoretical aspects of Computer Science, but I didn't have these topics in mind until very recently, so what happened? As unlikely as it may seem, everything started in Spain. During my Erasmus period in Valencia, in 2014, I took some courses about Artificial Intelligence, specifically Neural Networks and Machine Learning. It was a really cool topic, and when I returned to Udine I tried to link it to everything I was doing. So, as often happens, by chance, I asked the right question to the right people. In that period, prof. Mark Reynolds from UWA was visiting Udine to give a course about Temporal Logics, as part of the Automatic Systems Verification course held by prof. Angelo Montanari. I have always liked logics and automata a lot, and I was enjoying the course very much, but that year in my head was the year of AI, so I asked what, in hindsight, must have felt like a very naive question: are there any links between these topics and AI? Of course there are! I was pointed to a paper by, among others, Marta Cialdea Mayer, from the University of Roma Tre, and Andrea Orlandini, from the Italian National Research Council, which showed the encoding of classical planning into LTL that I mentioned before. This was the starting point for my master thesis, of which Marta and Andrea were co-supervisors. This experience turned into a fruitful collaboration that continues today with our joint work on timeline-based planning – we worked together on both the papers above – which is one of their core areas of expertise.

Mark Reynolds's 2014 visit was also the opportunity to learn about one of his recent works, a new tableau-based procedure for checking the satisfiability of Linear Temporal Logic formulas. It was a new way of solving a problem that had already been studied for a long time, and it seemed very promising. A colleague of mine, Matteo Bertello, as his project for that course, worked on implementing it in C++, creating a tool he called Leviathan, and he spent a little less than a year doing an amazing job optimizing it for performance. By the time he finished, I was just starting my Ph.D., and we managed to publish the results in a joint paper where I handled the experimental evaluation of the tool's performance and the writing of the actual manuscript. That was basically my first published paper (I had a previous publication, but without being involved in the writing), and considering I had been there for only three months, it was a very good start. Publishing it also had a very nice side effect, which I will tell you about later. Even if tableau-based systems for LTL do not belong to my main research topic, it's a very interesting one. Recently, a year later, Mark Reynolds visited us again, and we managed to submit a second paper, which has just been accepted to LPAR-21, and to collect material for at least two more. Having started as a side project, the work on this topic has become a relevant part of my research activity.

### Side projects without side effects

Speaking of side projects, you know I cannot reason about only one thing at a time, so I've been busy doing a lot of things in the meantime, a fact that has also caused some trouble. Last year I went to Meeting C++ to give a talk about Functional Programming in C++. I know, I promised a follow-up post a year and a half ago. Please be patient. Still talking about C++, a mention goes to our C++ User Group Udine, which is now three years old. We had a meeting in December and it was great, with nearly forty people and great guests giving talks. Somehow I managed to find time to handle the logistics of that meeting, and I'm quite proud of our group.

Still talking about functional programming, at the beginning of the year I was also involved in a side project that allowed me to learn a lot about the internals of GHC, the Glasgow Haskell Compiler. The project involved the STM library, a popular concurrency library that implements Software Transactional Memory in Haskell, and resulted in the bachelor thesis of a talented friend of mine, Valentino Picotti, also an active member of the C++ User Group. What is STM? Using this library, you can run concurrent code inside transactions with guarantees resembling those provided by database systems: atomicity, consistency, and isolation (three of the classical ACID guarantees; durability does not apply to in-memory transactions). In practice, this means that instead of having to lock some synchronization primitive before accessing a shared variable, you can just enclose your mutating code inside an `atomically` block and let the runtime do the dirty job:
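
A minimal sketch of the idiom, with a hypothetical `transfer` function of my own invention:

```haskell
import Control.Concurrent.STM

-- Move an amount between two shared counters. The reads and writes
-- below run as one atomic transaction: no explicit locks are taken,
-- and no other thread can ever observe the intermediate state.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  a <- readTVar from
  b <- readTVar to
  writeTVar from (a - amount)
  writeTVar to   (b + amount)
```

If another transaction commits a conflicting change in the meantime, the runtime simply rolls back and retries, which is exactly the dirty job mentioned above.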

### School’s out for summer

I hope Alice Cooper won't take it personally, but sometimes one has to go to school during summer, too. Indeed, no survey of my summer journeys would be complete without mentioning ESSLLI 2016, the 28th European Summer School in Logic, Language and Information: two intense weeks of courses on transversal disciplines ranging from computational logic, to natural language semantics, to philosophy. Located among the wonderful Dolomite landscapes of Bolzano, this summer school was a really formative experience. The courses were interesting and held by internationally recognized experts in their fields, but among all of them, one was enlightening: Social Choice Theory for Logicians. Eric Pacuit is a very talented lecturer, and, most importantly, his lectures constantly conveyed his passion for what he does. I had heard of Arrow's theorem¹ before, but I was barely aware of the existence of Social Choice Theory as a field before attending this course. Afterwards, I wished I could switch my research topic. Well, maybe I'm exaggerating a bit, but he definitely got everybody's attention.

Besides the interesting courses, the most important attraction of ESSLLI is probably the cross-fertilization between different disciplines. Among the attending students were computer scientists, logicians, linguists, engineers, and even philosophers, coming from all around the world, from China to Uruguay. In this way, I learnt things that I would not have come to know otherwise. Did you know many linguists commonly use lambda calculus to model natural language semantics? Neither did I.

### TIME flies

In October I went to Copenhagen to present our first paper on the expressiveness of timeline-based planning problems at TIME 2016, the 23rd International Symposium on Temporal Representation and Reasoning. TIME is a small but well-known conference which aims to let experts from very different disciplines meet each other under a common denominator, which is temporal reasoning. The conference is interested in temporal reasoning motivated by or applied to a broad variety of fields, ranging from AI to logic, to formal verification, to databases. It fits the kind of work I'm doing perfectly, and it was the perfect venue to start presenting my research.

But this conference in particular had an important emotional meaning for me. In 2014, when I was still a master student, I attended TIME 2014, held in Verona, invited by prof. Montanari, who sponsored the trip. That was a very troubling period for me, after returning from Spain, having to catch up with the exams, work on the thesis, and, most importantly, choose what to do next. TIME 2014 was the first academic conference I ever attended, and it was a very enlightening experience. Two years later, I was at TIME 2016, presenting the first little steps of my own research, feeling like I was closing the circle in some way. Time flies, and this feeling reminded me why mine was well spent. It was a very little step, all considered, but it was worth all the effort. One thing is sure: I would not have been able to do anything without the support of many special people: my family, my friends, but especially my girlfriend: to stand any battle, even the smallest one, we all need someone to fight for.

## Next round

When I started writing this post, I thought I had just a few things to say, only to realize how many things can be done in a single year. For instance, I didn't even mention two other conferences and another summer school that I attended, but this post has already grown too long. Maybe, instead of waiting an entire year to find time to write something up, I will manage to come to this blog more often, as originally intended. Maybe ;)

1. Sadly, Kenneth Arrow, winner of the von Neumann Theory Prize and the National Medal of Science, passed away less than two months ago.