Writing a basic Qt project with Qt Creator


I'm assuming you are already able to program and have at least had a look at C++. For example, I won't explain why int main(int argc, char *argv[]) appears in the C++ source code.

If you create a „Qt Console Application“ you will have the following code:
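(The listing below is a sketch of what Qt Creator generates; the exact template may differ slightly between versions.)

    #include <QCoreApplication>

    int main(int argc, char *argv[])
    {
        QCoreApplication a(argc, argv);

        return a.exec();
    }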

The line QCoreApplication a(argc, argv) passes the command-line parameters to the Qt framework and initializes it. After that you can use the framework. It is very rare that any other statements are executed before this line.
To start the processing of events in the main event loop, a.exec() is called. This method does not return until it is told to do so. If no events are in the loop, it waits for new ones. To quit the program you can call QCoreApplication::quit() or QCoreApplication::exit(); they do essentially the same thing (quit() is equivalent to exit(0)). They enqueue an event to exit the event loop, so even though the call returns immediately, the program will not exit until all events enqueued before it have been processed. If you want to learn more about this topic, the (well written) Qt 5 documentation is very helpful, or just use Google.

Between QCoreApplication a(argc, argv) and a.exec() you normally do the initialization of your code. In the case of a „Qt Widgets Application“ you'll initialize the MainWindow.
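A sketch of the generated main.cpp of such a project (MainWindow is the default class name):

    #include "mainwindow.h"
    #include <QApplication>

    int main(int argc, char *argv[])
    {
        QApplication a(argc, argv);
        MainWindow w;
        w.show();

        return a.exec();
    }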

This code will display an empty window. The header file ui_mainwindow.h, which you never write yourself, is one reason why Qt is so popular: it is generated by Qt from the contents of mainwindow.ui. That file contains all the layouts, buttons, views, basically everything that is displayed to the user, and you can edit it with a graphical WYSIWYG editor. This way your code has a separation of UI and logic by default. If you want, you can still build your UI in code, and you can even mix both approaches, but you don't have to. By using the ui file you can eliminate a lot of boilerplate code; and as we all know, less code means fewer possible bugs.

Let's add a button to our UI. First you need to open the ui file, in my case mainwindow.ui. Then it's as simple as dragging the „Push Button“ to the position you like on your UI and dropping it. There it is on the form; and if you compile & run your app, it's already part of the real UI.
Now we change the text on the button. Just select the button, then go to the bottom right of Qt Creator and search for text. Change the content to whatever you like; I'll change it to say hello. Compile & run and you will see that the change has already happened.

But what if I want to print a text on the console when the button is pressed? No problem, just right-click the button and select Go to slot.... A pop-up will open where you can select the signal (= event) on which you want to execute something. In our case it's clicked(). Confirm with OK and let Qt Creator do its work. Now you have a method called on_pushButton_clicked() (if you left the objectName at its default value „pushButton“).
Now just write your code to print a line on the console: qDebug() << "Hello World"; (and don't forget to include QDebug). Compile & run your application. If you now press the button, Hello World is printed on the console.
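In mainwindow.cpp the generated slot then looks roughly like this (assuming the default objectName „pushButton“ and an #include <QDebug> at the top of the file):

    void MainWindow::on_pushButton_clicked()
    {
        qDebug() << "Hello World";
    }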
If you have problems with this tutorial, just ask in the comments or check out the working example at github.com/nidomiro/BasicWidgetsApplication.

The best way to learn anything (in my opinion) is to just do it and see if it works the way you want. In programming you can do this easily. So just play around, check out some existing Qt applications, change some code and see what the result is. And as a last tip for this post: if something doesn't compile or link but should, try cleaning the project and running qmake again.


Ubuntu: automatic password for second encrypted disk

I just encountered the problem that I have to type two passwords at startup, for two encrypted disks. My first disk is encrypted through the Ubuntu installer.
After some searching I found the perfect solution for that task: in German it's called „Schlüsselableitung“, in English derived keys. But perfect solutions often have a big issue that keeps them from working, like here. I'm using Ubuntu 16.04, which uses systemd, and systemd has problems with derived keys. So I went with the second most perfect solution for me: using a key-file. Some people argue that this is a security issue, but the derived key is also obtainable with root rights, just like a key-file. And by the way, the private keys of your certificates are also stored on those disks and nearly nobody complains about that.

I assume the following setup:

  • You are using LUKS
  • Your whole system is encrypted with the option you can select when installing Ubuntu (or similar, but home-directory encryption does not count here). Otherwise your key-file is accessible to anyone with physical access to your computer.
  • You have a second disk that is already encrypted (sda1 in my case)
  • You will take care of the actual mounting yourself

So let's do it. First we create a key-file with 4096 bits, which should be enough.
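For example (the path /root/.keyfile_sda1 is just my choice, use whatever you like):

    sudo dd if=/dev/urandom of=/root/.keyfile_sda1 bs=512 count=1   # 512 bytes = 4096 bit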

Next we should forbid any access for any user except root. Otherwise the bad guys could steal the key-file and decrypt your sensitive data.
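With the path from above:

    sudo chown root:root /root/.keyfile_sda1
    sudo chmod 400 /root/.keyfile_sda1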

Now we add the key-file to the second disk, ironically it's sda1 in my case. In order to do that, you have to type in one password that is capable of decrypting the drive.
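A sketch of the command (adjust the device and the key-file path):

    sudo cryptsetup luksAddKey /dev/sda1 /root/.keyfile_sda1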

The only thing missing right now is the automatic part. If you rebooted now, you would see no difference. For the automation we need the PARTUUID or UUID of the drive. The difference between the two is that the PARTUUID stays the same even if you format the drive. If you don't see a PARTUUID you are not using GPT as the partition table, but you can use the UUID instead.
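You can look both up with blkid, for example:

    sudo blkid /dev/sda1
    # or list all partitions by PARTUUID:
    ls -l /dev/disk/by-partuuid/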

Now the only thing left on the way to a world free of (multiple) password prompts is to add the magic line to /etc/crypttab. I use the PARTUUID, but you can replace it with the UUID if you want (or need to).
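The line looks roughly like this (the PARTUUID is a placeholder, the key-file path is the one chosen above):

    # /etc/crypttab
    data_lux  PARTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /root/.keyfile_sda1  luks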

After a reboot you will be prompted for only one password, and your second encrypted disk should be listed in /dev/mapper/ as data_lux. Of course you can change that name; just replace every data_lux in this tutorial with your favorite name.
From here I use the automatic mount options of KDE to mount the disk at startup.

My Sources:
https://wiki.ubuntuusers.de/LUKS/ (German)
https://wiki.ubuntuusers.de/LUKS/Schl%C3%BCsselableitung/ (German)
https://www.martineve.com/2012/11/02/luks-encrypting-multiple-partitions-on-debianubuntu-with-a-single-passphrase/

Qt Signals & Slots: How they work


The one thing that confuses most people in the beginning is the Signal & Slot mechanism of Qt. But it's actually not that difficult to understand. In general, Signals & Slots are used to loosely connect classes. Illustrated by the keyword emit, Signals are used to broadcast a message to all connected Slots. If no Slots are connected, the message „is lost in the wild“. So a connection between Signals & Slots is like a TCP/IP connection with a few exceptions, but the metaphor helps to get the principle: a Signal is an outgoing port, a Slot is an input-only port, and a Signal can be connected to multiple Slots.
For me one of the best things is that you don't have to bother with synchronization between different threads. For example, you have one QObject emitting the Signal and one QObject receiving the Signal via a Slot, but in a different thread. You connect them via QObject::connect(...) and the framework will deal with the synchronization for you. There is one thing to keep in mind, though: if a parameter is an object that uses implicit sharing (like OpenCV's cv::Mat), you have to deal with the synchronization yourself.
The standard use-case of Signals & Slots is interacting with the UI from the code while remaining responsive. This is nothing more than a specific version of „communicating between threads“.
Another benefit of using them is loosely coupled objects. The QObject emitting the Signal does not know the receiving QObject and vice versa. This way you are able to connect QObjects that are otherwise only reachable via a full chain of pointer calls (e.g. this->objA->...->objZ->objB->recieveAQString()). This alone can save you hours of work if someone decides to change some structure, e.g. the UI.

Right now I have only mentioned Signal and Slot methods. But you are not limited to methods, at least on the Slot side: you can also use lambda functions and function pointers. This brings some of the convenience of languages like Python or Swift to C++.

For some demonstrations I will use the following classes:
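A sketch of how the two classes could look (names and spellings are taken from the connect calls and the output further below):

    #include <QObject>
    #include <QString>
    #include <QDebug>

    class AObject : public QObject
    {
        Q_OBJECT
    public:
        explicit AObject(QObject *parent = nullptr) : QObject(parent) {}

    signals:
        void signalSometing(QString text);
    };

    class BObject : public QObject
    {
        Q_OBJECT
    public:
        explicit BObject(QObject *parent = nullptr) : QObject(parent) {}

    public slots:
        void recieveAQString(QString text)
        {
            qDebug() << "recived a QString: " << text;
        }
    };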

Using Connections

To connect a Signal to a Slot you can simply call QObject::connect(a, &AObject::signalSometing, b, &BObject::recieveAQString), or QObject::connect(a, SIGNAL(signalSometing(QString)), b, SLOT(recieveAQString(QString))) if you want to use the „old“ syntax. The main difference is that with the new syntax you get compile-time type checking and conversion. But one big advantage of the „old“ method is that you don't need to bother with inheritance and selecting the most specialized method.
Lambdas can be a very efficient way of using Signals & Slots. If you just want to print a value, e.g. whenever the corresponding property changes, a lambda is the most convenient way: you don't have to blow up your classes with trivial methods. But be aware that if you manipulate any object inside the lambda, synchronization issues (in a multithreaded environment) might occur.
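A connect with a lambda as receiver could look like this (a being a pointer to an AObject as above):

    QObject::connect(a, &AObject::signalSometing, [](const QString &text) {
        qDebug() << "lambda got:" << text;
    });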

You will get an idea of how to use the different methods in the following example:
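A sketch of such an example, connecting the same Signal & Slot twice, once with each syntax:

    #include <QCoreApplication>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        AObject a;
        BObject b;

        // the same connection, made with the new and with the old syntax:
        QObject::connect(&a, &AObject::signalSometing, &b, &BObject::recieveAQString);
        QObject::connect(&a, SIGNAL(signalSometing(QString)), &b, SLOT(recieveAQString(QString)));

        emit a.signalSometing("Hello");

        return app.exec();
    }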

As you see, recived a QString:  "Hello" is printed two times. This happens because we connected the same Signal & Slot twice (using different methods). In case you don't want that, the next section Connection Types shows some ways to prevent it, along with other options.

One side note: if you are using Qt::QueuedConnection and your program looks like the following example, at some point you will probably wonder why emitting the Signal does not call the Slot until app.exec() is called. The reason for this behavior is that the event loop the Slot call is enqueued in only starts with that call (and blocks until the program exits).
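A sketch of that situation (same classes as above, but with an explicit Qt::QueuedConnection):

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        AObject a;
        BObject b;
        QObject::connect(&a, &AObject::signalSometing,
                         &b, &BObject::recieveAQString,
                         Qt::QueuedConnection);

        emit a.signalSometing("Hello");   // nothing is printed yet ...

        return app.exec();                // ... the Slot runs once the event loop starts
    }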

And before we start with the next section, here is a little trick to call a method of an object living in another thread within the context of that thread. This means that the method will be executed by the other thread and not by the „calling“ one.
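One way to do that is QMetaObject::invokeMethod; a sketch, assuming b points to a BObject living in another thread (the method has to be a Slot or marked Q_INVOKABLE):

    // Enqueues the call in b's thread and returns immediately (Qt::QueuedConnection).
    QMetaObject::invokeMethod(b, "recieveAQString",
                              Qt::QueuedConnection,
                              Q_ARG(QString, QStringLiteral("Hello from another thread")));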

To learn more about that here is your source of truth: https://doc.qt.io/qt-5/qmetamethod.html#invoke

Connection Types

Qt::AutoConnection

Qt::AutoConnection is the default value for any QObject::connect(...) call. If both QObjects involved live in the same thread, a Qt::DirectConnection is used. But if one lives in another thread, a Qt::QueuedConnection is used instead to ensure thread-safety. Keep in mind that the actual connection type is determined each time the Signal is emitted, so it can change if you move one of the QObjects to another thread afterwards. I generally use Qt::QueuedConnection explicitly if I know that the QObjects are in different threads.

Qt::DirectConnection

A Qt::DirectConnection is the connection with the least overhead you can get with Signals & Slots. You can visualize it this way: when you call the Signal, the method generated by Qt calls all connected Slots in place and then returns.

Qt::QueuedConnection

The Qt::QueuedConnection ensures that the Slot is called in the thread of the corresponding QObject. It relies on the fact that every thread in Qt (QThread) has an event queue by default. So if you call the Signal, the method generated by Qt enqueues the command to call the Slot in the event queue of the other QObject's thread. The Signal method returns immediately after enqueuing the command. To ensure that all parameters exist within the other thread's scope, they have to be copied. The meta-object system of Qt has to know all of the parameter types to be capable of that (see qRegisterMetaType).

Qt::BlockingQueuedConnection

A Qt::BlockingQueuedConnection is like a Qt::QueuedConnection, but the Signal method blocks until the Slot returns. If you use this connection type on QObjects that live in the same thread you will get a deadlock. And no one likes deadlocks (at least I don't know anyone who does).

Qt::UniqueConnection

Qt::UniqueConnection is not really a connection type but a modifier. If you use this flag, the same connection cannot be established a second time: trying to do so makes QObject::connect(...) fail and return false.
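You pass it combined with a connection type, for example:

    // a second, identical connect(...) with this flag fails and returns false:
    QObject::connect(a, &AObject::signalSometing, b, &BObject::recieveAQString,
                     static_cast<Qt::ConnectionType>(Qt::AutoConnection | Qt::UniqueConnection));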

This is not everything you will ever need to know about Signals & Slots, but with this information you can cover about 80% of all use-cases (in my opinion).
If you ever happen to need the other 20%, here are some good links to search for your specific problem:

The Qt documentation:
https://doc.qt.io/qt-5/signalsandslots.html
Very deep understanding:
Part1: https://woboq.com/blog/how-qt-signals-slots-work.html
Part2: https://woboq.com/blog/how-qt-signals-slots-work-part2-qt5.html
Part3: https://woboq.com/blog/how-qt-signals-slots-work-part3-queuedconnection.html


How to start with Qt?

In this series I'll give you a starting point for working with Qt. Like I mentioned in Why I love the Qt framework, I had a hard time at the beginning. I want to give you an easier start with this awesome piece of technology.
This page will serve as an index for the whole series of tutorials and explanations. As more posts follow, this page will be updated.
I know there are plenty of tutorials on Qt, but maybe I’ll explain some things in a way you understand better.

  1. Writing a basic Qt project with Qt Creator
  2. Signals and Slots: How they work

Should I use Qt containers or the std ones?

If you come from plain vanilla C++, you only know the C++ Standard Library with its containers like std::vector. If you know how to use them, you can accomplish your tasks pretty fast. But if you're coming from another language, the naming might seem a bit odd. The Qt containers offer both C++-style and Java-style method naming, so coming from another language the Qt ones might be easier to use.
The Qt containers use the COW (copy-on-write) technique to be able to create cheap copies. This technique uses an internal container to store all the data, accessed through a pointer. If you copy an instance of e.g. a QList, you get a new QList instance with a pointer to the exact same internal container. You can use the new list exactly like the old one. The internal container stays shared as long as neither list is modified. But any operation that modifies one of the lists, e.g. adding or removing elements, results in a detach before the modifying operation is performed. Detach means that the whole internal container is copied. After the detach you have two completely independent lists with two independent internal containers.
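A small sketch of that behaviour:

    #include <QList>

    int main()
    {
        QList<int> a {1, 2, 3};
        QList<int> b = a;   // cheap copy: both lists share one internal container

        b.append(4);        // detach: b copies the data before modifying it,
                            // a is still {1, 2, 3}, b is now {1, 2, 3, 4}
        return 0;
    }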

So if you pass a QList to a function or method (by value), the act of copying will be very cheap. As long as you don’t modify the list inside the function or method, you have saved yourself some RAM.

The std containers don't use this technique, in order to be fast and to know exactly how long each operation will take. If you pass a std::vector to a function or method (by value), the whole vector is copied.

But be careful, Qt containers can show some surprising behavior when combined with the C++11 range-based for syntax. If you use for (Element e : list) on a QList that has been copied, you cannot be sure whether it detaches (copies) the internal container, because the loop uses the non-const begin() and end(), which counts as a possible modification.
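If you want to be sure that no detach happens, iterate over a const view of the list, for example (original being an existing QList<int>; qAsConst needs Qt 5.7 or later, the const reference works everywhere):

    QList<int> copy = original;        // shares the internal container

    for (int value : qAsConst(copy))   // const begin()/end(), no detach
        qDebug() << value;

    const QList<int> &ref = copy;      // alternative without qAsConst
    for (int value : ref)
        qDebug() << value;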

Right now I haven't answered the question which containers are the best ones to use. And in fact there is no golden answer to this question; maybe there never will be. So the answer I can give you today is the same one you have probably heard many times before on other topics: it depends. Obviously it's overkill to use the Qt framework just because of its containers.

Maybe it's better to use std; in fact many of the Qt containers use the std containers under the hood. But if you interact with the Qt library you will see that you tend to use the Qt containers, simply because that is easier than converting back and forth. And yes, I know some people recommend using std instead of Qt as much as possible. Personally I use both: Qt for convenience and std if speed is an issue. The one thing I'm pretty sure about is that there is no ultimate answer to this question.

[EN] Why I love the Qt framework

Everyone who knows me knows that I love the Qt framework. Before I started programming in C++, Java was my primary programming language. I love generics (yes, some of you will hate me for that opinion right now) and reflection. During my Java time I used them very often to increase reusability.
But while studying we had to learn C++, and I hated it in the beginning. It felt so old and so stiff compared to Java. In one lecture we used Qt, and it was even more terrible – in the beginning. Signals, Slots, QWidgets, … it was too much. After working with it for some time it felt better. What I really liked about Qt from the beginning was the Qt Designer. For the first time I was able to have fancy UIs in C++ without having to bother with a C API. In Java I had a graphical editor for Swing, so creating UIs was now as easy as in Java, maybe a bit easier. But the results with Qt are much better and fancier.
After some reading and experimentation I began to understand how Signals and Slots work. From then on I really liked Qt. It was awesome. It was so easy to accomplish tasks, and it still is. If you build an application with a UI and some heavy stuff to calculate, or simply put, two threads that need to communicate, it is very easy with Qt. Just use Signals and Slots to communicate between the threads and the Qt framework will deal with all the synchronization (if you use them properly). You can also use Signals and Slots to loosely couple objects. The sender object that emitted the Signal does not know where the message is going, and the receiver object does not know where it came from. So you can easily exchange components, just like with dependency injection.
If I write an application using the C++ standard library and Qt, I can be sure that it will compile and run on all the major platforms. A few months ago I switched from Windows to Linux; the reason can be found in Warum ich zu Linux wechsle if you can read German. I was so excited when I opened the project file of my Qt project: I had developed the whole project on Windows, and with a click of the „Compile & Run“ button it was running under Linux.
But portability is not the only advantage. You have a huge library with many classes solving different problems. If you want to use networking, for example: no problem, just use it. The same goes for JSON support, XML, SQL, Bluetooth, OpenGL and many more. It's just so convenient; you don't have to search for a library, try to compile it and link it to your project, because Qt does all that for you. So you don't have to reinvent the wheel every time.
At the beginning I mentioned reflection in Java. You can use reflection in Qt with the help of moc (the Meta-Object Compiler): every class that inherits from QObject knows what class it is, and other information, at runtime.

The things I mentioned here are just a subset of what Qt is capable of. KDE uses Qt to build an entire desktop environment and other programs. And they are not the only ones using Qt; just have a look at the Wikipedia page of Qt and you will see companies like the European Space Agency, DreamWorks and Siemens.

[EN] How to work on your projects on multiple devices

At the beginning of my programming life I never thought of synchronization of my projects as an issue. Back then I only had a computer standing in my room. Then I got a laptop from the company I worked for at the time. Synchronization was still not an issue, because I kept private and work separate. But the whole journey began when I started studying and bought myself a laptop. In the beginning I copied my workspace onto a USB drive after I finished on one device and then copied it to the other machine. It was simply ugly… After a few weeks I had hundreds of different versions, because I had worked a bit on the laptop, forgot to copy it to my PC, but wanted to save the progress I had possibly made.
The second step was a tool I knew from work: Mercurial, a version control system. Now I could just commit my changes and merge them. For syncing I used my own webserver, where I hosted a Mercurial server. No more different versions of the same code, I thought. But I was so wrong. In order to be in sync I needed to make a commit and push it to my server. There lies the problem: sometimes I forgot to commit, sometimes I was just lazy, and sometimes the code was not compilable, and I knew the „golden“ rule from work: „Commit only compilable code“. So there were still different versions of my code. Using a separate branch dedicated to my „in-development“ code didn't fix the problem either: it fixed the problem with the rule, but I still forgot to commit and push sometimes.
My current solution came with the „cloud era“. Since I'm concerned about privacy I don't use commercial clouds like Google Drive or Dropbox. First I hosted my own ownCloud server (version 4 back then); after several issues with disappearing files I switched to Seafile. Seafile has served as my cloud ever since. I use it to synchronize nearly everything, but the biggest benefit is that I can now automatically sync my workspace directory between my computer, my laptop and any device Seafile supports. As long as I'm connected to the Internet, my changes are automatically pushed to the server.
This is my current solution to the sync problem. But I'm thinking about switching, fully or in part, to Syncthing. With Syncthing the devices can sync directly over the fast local network; they would only use the webserver if no other device is connected to the same network. This method is especially good for people with a bad Internet connection. Here I would think of a setup like this: a webserver, a server in the local network (like a Raspberry Pi) and the clients. If one client shares a huge amount of data, it is synced to the local server very quickly. Then you don't need to keep your machine running until the upload finishes; the local server will do that for you.
Of course it's up to you what you want to use. And for many people the existing commercial cloud services may be a better solution, but not for me.

[DE] Warum ich zu Linux wechsle (Why I am switching to Linux)

My decision is finally made: after several years of deliberation I am switching to Linux and turning my back on Windows (10). It wasn't easy, but thanks to Windows 10 and the forced upgrade for Windows 7 – 8.1, Microsoft has practically made this decision for me.

When it comes to privacy, you could call me somewhat paranoid. Accordingly, I was sceptical from the start about the data transmission to Microsoft that cannot be switched off. The „choice“ between „Full“ and „Basic“ is, in my opinion, no real choice. Nowhere does it say exactly which data Windows 10 actually sends to Microsoft. Do the pictures I look at, the texts I write, which keys I press at which time, etc. also count as usage statistics? We get a small taste of what data accumulates from a Microsoft blog post: https://blogs.windows.com/windowsexperience/2016/01/04/windows-10-now-active-on-over-200-million-devices/ . There Microsoft proudly presents how many minutes users have already spent on the Internet with Edge and how many pictures have already been viewed with the Windows Photo app – and that is only the data that was published…

I regard my computer, which I bought myself and which stands in my apartment, as my personal sovereign territory. If I install a program that spies on everything on my computer, then it is my decision whether I accept that or not. But with Windows 10 at the latest, you no longer have that choice. The user no longer even has the option to say: no, I won't install updates. On the surface Microsoft's arguments are plausible: this way all computers can be updated quickly once security holes have been fixed. What most people forget, however, is that this creates a serious security hole of its own. Microsoft, or anyone who can gain access, has the ability to install arbitrary software on every existing Windows 10 computer. Exactly this kind of functionality is built into many trojans and viruses to take control of a computer and use it for their own purposes. Of course you could say that Linux also offers automatic updates that could be abused. That is not wrong, but as a user I have to run the update myself, so nobody can dictate that I must update. In addition, all sources from which updates are fetched are publicly visible, so everyone at least has the possibility to detect abuse.

Now let's assume that a state organization such as the NSA comes to Microsoft and forces it to install spyware on every Windows computer via the update system. This is the point where many people say: „I have nothing to hide.“ That may well be – personally I have nothing to hide either – but that doesn't mean I would put up with someone placing cameras in my apartment, even if they only want to check whether I am really being good. But the topic of privacy seems to interest only a few people these days; it is more important that everyone knows what, where and with whom I am currently eating. But that is a (somewhat) different topic.

To conclude the negative points I have found so far, I am reminded of a comparison an acquaintance made recently: „Windows 10 is like the free SIM cards from the movie ‚Kingsman: The Secret Service‘.“

As much as I would like to claim otherwise, Linux is far from perfect either – but then, what is? I currently use Kubuntu and KDE Neon. Unfortunately these systems are not as „bug-free“ as Windows, but you also have to consider that there is no entire industry behind them. Hardware drivers usually have to be written by developers outside the manufacturer in their free time, and most of the software is written by developers in their free time as well. Accordingly, many devices work better under Windows than under Linux. However, there are also many examples that show the opposite.

I have to admit that I have not yet switched to Linux completely and still have Windows 7 on my computer. I still need it for my work at the moment, since I develop plugins for Windows programs. The graphics performance you need for gaming, for example, is also still considerably better under Windows than under Linux. Here, however, I hope that AMD's new driver policy and Valve will change that soon.

For me, there are currently more reasons not to use Windows (10) than not to use Linux. Linux also has a property that clearly sets it apart from Windows: it is open source. Looking at history, the basic idea of open source is what got us to where we are now. Without open source there would be no WWW or e-mail in their current form, for example. Even in earlier times, people copied their neighbour's way of working if it let them do their work faster, more efficiently and better; because copying and improving is progress.

[EN] Installing Redmine 3.0 on clean Ubuntu 14.04

In this tutorial we will install Redmine on a clean installation of Ubuntu Server 14.04 with an Apache server and MySQL. Redmine will be reachable under the subdomain redmine.example.com.

Here Redmine will be installed to /var/www/vhosts/redmine. I use unzip to unpack the archive. The placeholder for the username you are logged into the system with is $sysUser$.

Step 1 – Installing required software

First we need to update our packages. If you encounter any problems later on, try to fix them by updating your packages again.
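For example:

    sudo apt-get update
    sudo apt-get upgrade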

Now install apache2 and mysql:
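Something like this (unzip is included because we need it later):

    sudo apt-get install apache2 mysql-server mysql-client unzip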

While installing mysql-server you will be asked to set a MySQL root password.

Step 2 – Database creation

To create the database, log in as the MySQL root user with the password you set earlier.
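For example:

    mysql -u root -p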

The following MySQL queries create the database and the user to access it. Be sure to replace my_password with another password (not the root password) before executing.
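A sketch of the queries (the database and user name redmine are my choice and have to match database.yml later on):

    CREATE DATABASE redmine CHARACTER SET utf8;
    CREATE USER 'redmine'@'localhost' IDENTIFIED BY 'my_password';
    GRANT ALL PRIVILEGES ON redmine.* TO 'redmine'@'localhost';
    FLUSH PRIVILEGES;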

To exit mysql, simply type exit and execute.

Step 3 – Downloading Redmine

Now switch into your vhosts directory, download Redmine and unzip it. If you want, you can use the .tar.gz file with the tar command as well.
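Roughly like this (the URL and version are examples, check redmine.org for the current 3.0.x release):

    cd /var/www/vhosts
    sudo wget https://www.redmine.org/releases/redmine-3.0.0.zip
    sudo unzip redmine-3.0.0.zip
    sudo mv redmine-3.0.0 redmine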

Step 4 – Database configuration

We need to alter the file /var/www/vhosts/redmine/config/database.yml.example and save it as database.yml. Replace my_password with the password set in the MySQL query before.
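A sketch of the production section (database and user name as created above):

    production:
      adapter: mysql2
      database: redmine
      host: localhost
      username: redmine
      password: "my_password"
      encoding: utf8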

If your MySQL server is running on another port (e.g. 3344) you can use the following config:
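For example:

    production:
      adapter: mysql2
      database: redmine
      host: localhost
      port: 3344
      username: redmine
      password: "my_password"
      encoding: utf8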

Step 5 – Installing ruby

First we have to remove the old version of ruby.
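For example (the exact package names depend on what is installed, e.g. ruby1.9.1 on Ubuntu 14.04):

    sudo apt-get remove --purge ruby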

Execute the following command (not as root user) with the ‚\‘ at the beginning. It will install Ruby on your system, usable by every user in the group rvm. The installation folder is /usr/local/rvm.
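The multi-user RVM installation (including Rails) is usually started like this; treat the exact command as an assumption and compare it with the current instructions on rvm.io:

    \curl -sSL https://get.rvm.io | sudo bash -s stable --rails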

In my case the first run failed because of a missing signature. The command to install this signature is displayed in the error message. In my case it was this:
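It was a gpg command along these lines (copy the exact key ID from your own error message):

    gpg --keyserver hkp://keys.gnupg.net --recv-keys <key-id from the error message>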

Then rerun the command to install ruby.

After some time Ruby is installed together with Rails. Now the users $sysUser$ and www-data need to be added to the group rvm.
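For example:

    sudo usermod -a -G rvm $sysUser$
    sudo usermod -a -G rvm www-data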

After a relogin the change should be applied. Then switch back to your Redmine folder.
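Then install the gems Redmine needs; this is the command the next hint refers to (bundler may already be installed by RVM):

    cd /var/www/vhosts/redmine
    gem install bundler
    bundle install --without development test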

If you encounter any problems with mysql during installation, execute sudo apt-get install libmysqlclient-dev. Then execute the command again.

Step 6 – Configuring Redmine with ruby

Generate a secret cookie token:
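For example:

    bundle exec rake generate_secret_token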

Create the database structure:
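For example:

    RAILS_ENV=production bundle exec rake db:migrate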

Insert the default configuration:
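For example (REDMINE_LANG selects the language of the default data):

    RAILS_ENV=production REDMINE_LANG=en bundle exec rake redmine:load_default_data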

Start the webrick server to test if everything is OK. You can only access the webrick server from localhost, so it's enough if it starts without throwing errors. Then shut down the webrick server.
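For example (stop it again with Ctrl+C):

    bundle exec rails server webrick -e production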

Step 7 – Installing passenger

Because the repository version of passenger would install the old Ruby from the repository again, we have to install it via gem.
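For example:

    gem install passenger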

Now we have to install some other packages required for the passenger integration in apache2:
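On Ubuntu 14.04 that is typically something like this (the package names are an assumption; passenger-install-apache2-module tells you exactly what is missing):

    sudo apt-get install libcurl4-openssl-dev apache2-threaded-dev libapr1-dev libaprutil1-dev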

Then execute the following commands to install the passenger module to apache2.
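Since passenger was installed as a gem inside RVM, something like this (rvmsudo keeps the RVM environment):

    rvmsudo passenger-install-apache2-module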

 

Alter the file /etc/apache2/mods-available/passenger.load to this:
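The exact paths are printed by passenger-install-apache2-module at the end of its run; the file then looks roughly like this (placeholders in angle brackets):

    LoadModule passenger_module /usr/local/rvm/gems/<ruby-version>/gems/passenger-<version>/buildout/apache2/mod_passenger.so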

And the file /etc/apache2/mods-available/passenger.conf to this:
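Again with the paths reported by the passenger installer:

    <IfModule mod_passenger.c>
      PassengerRoot /usr/local/rvm/gems/<ruby-version>/gems/passenger-<version>
      PassengerDefaultRuby /usr/local/rvm/gems/<ruby-version>/wrappers/ruby
    </IfModule>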

Now enable the passenger mod:
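For example:

    sudo a2enmod passenger
    sudo service apache2 restart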

Step 8 – Configuring Apache

To reach Redmine under redmine.example.com we have to create a new file in /etc/apache2/sites-available. I called the file redmine.example.com.conf.
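A sketch of such a vhost file (paths as used in this tutorial, adjust ServerName to your domain):

    <VirtualHost *:80>
        ServerName redmine.example.com
        DocumentRoot /var/www/vhosts/redmine/public

        <Directory /var/www/vhosts/redmine/public>
            Options -MultiViews
            Require all granted
        </Directory>
    </VirtualHost>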

To activate the site, execute a2ensite redmine.example.com.conf and rename the file /var/www/vhosts/redmine/public/dispatch.fcgi.example to dispatch.fcgi. Now restart apache2 via sudo service apache2 restart.

Everything should be up and running. If you have any suggestions to improve the installation process, comment below.

 

Sources: