You can find a page with project demos here.
Archive
Implementation and Comparison of Different Reinforcement Learning Approaches for Chess (SoSe 2019)
Chess is a two-player strategy board game played on a checkered gameboard with 64 squares arranged in an 8x8 grid. Due to its popularity, there have been many different approaches to implementing AI opponents. In this project we focus on applying reinforcement learning. The goal is to implement two different algorithms, which will eventually be able to play against each other. One of them will use deep reinforcement learning, while the other will be a simpler approach such as Q-Learning or TD(λ). Both will learn by playing against themselves for a certain number of games.
For the implementation we will use OpenAI Gym and python-chess. OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms, which we will use to train our own agents. python-chess is a Python library containing the complete logic of the game, from setting up the 8x8 board to making moves and finishing a game. It can also print the current state of the board in ASCII format, providing limited visualization out of the box. We will use python-chess to create an environment for OpenAI Gym.
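As a rough illustration, such an environment could wrap python-chess in the standard Gym interface; the following minimal sketch uses a FEN string as observation and a placeholder reward, neither of which is necessarily the design we will end up with:

import gym
import chess

class ChessEnv(gym.Env):
    # Minimal chess environment wrapping python-chess for OpenAI Gym.

    def reset(self):
        self.board = chess.Board()   # standard starting position
        return self.board.fen()      # placeholder observation: FEN string

    def step(self, move):
        self.board.push(move)        # move is a chess.Move from board.legal_moves
        done = self.board.is_game_over()
        # placeholder reward: +1 if white wins, -1 if black wins, 0 otherwise
        reward = {"1-0": 1.0, "0-1": -1.0}.get(self.board.result(), 0.0) if done else 0.0
        return self.board.fen(), reward, done, {}

    def render(self, mode="human"):
        print(self.board)            # ASCII board rendering via python-chess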
Participants
Sidney Hartmann, Niklas Winkemann
InterUniversal File System (SoSe 2019)
As part of our bachelor project, we built a prototype of a project called "IUFS", which is inspired by the "InterPlanetary File System" (IPFS). Within the scope of the base.camp, we plan to extend the project by further developing the existing prototype of a distributed, peer-to-peer file system based on a distributed hash table, in order to provide fast and reliable content delivery for files (e.g. article images) that are too large to be stored in any blockchain. We will especially focus on the scalability of the network as well as its robustness in terms of data integrity and availability.
The idea of the IUFS came up in the module "Projekt Einführung in die Entwicklung verteilter kontextbasierter Anwendungen" while we were developing an application called "BlogChain" that runs on the Cadeia blockchain of the VSIS team. The project has features similar to Steemit; however, it does not incorporate a cryptocurrency. During the development of "BlogChain" we learned that it is impractical to store files, e.g. article images, in the blockchain, so we looked for a way to persistently store files that are needed in the context of a blockchain. Since we did not want to heavily compress the uploaded files and also did not want to force users to rely on possibly non-distributed third-party file hosting services, we decided to prototypically implement our own distributed file system under the working title "InterUniversal File System" (IUFS). By the end of the semester, we had a working prototype of the IUFS, which of course still has much potential for improvement. The basic framework of our project is a distributed hash table (DHT), and it is inspired by the "InterPlanetary File System" (IPFS).
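As in IPFS, the central mechanism is content addressing: a file is stored in the DHT under a hash of its content, so every peer can verify the integrity of a file after retrieving it. A minimal sketch of that idea (the helper names are illustrative, not the actual IUFS code):

import hashlib

def content_key(data):
    # Derive the DHT key from the file content itself (content addressing).
    return hashlib.sha256(data).hexdigest()

def verify(data, key):
    # Any peer can check integrity by re-hashing the retrieved bytes.
    return content_key(data) == key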
Participants
Kevin Röbert and Arne Staar
Automatic animal identification on farms for customer transparency (WS 2018/19)
This project develops a system that enables customers to see which animals were used in a product. A customer in a store scans a tag on the product and sees corresponding pictures of the animals and their living conditions. The tool also helps farmers promote their farms by offering full transparency to the consumer.
In this project, we collect images and videos of animals on a farm. Using object recognition and optical character recognition (OCR), the animals are identified by their ear tags. This data is fed into a mobile web application that allows customers to retrieve photos of the animals by tag number.
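A rough sketch of the tag-reading step, assuming OpenCV for preprocessing and the pytesseract wrapper around the Tesseract OCR engine (the preprocessing shown is illustrative, not the final pipeline):

import cv2
import pytesseract

def read_ear_tag(image_path):
    # Extract the tag number from a cropped ear-tag image.
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # binarize so the printed digits stand out for the OCR engine
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # restrict recognition to digits, since ear tags carry numeric IDs
    config = "--psm 7 -c tessedit_char_whitelist=0123456789"
    return pytesseract.image_to_string(thresh, config=config).strip()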
Upon successful completion of the project, we plan to automate the process of taking pictures, recognizing animals by tag and storing them. The idea is to install an automated camera system on the farm that uploads the images into the web application for processing, so that data is readily available for the farmer and the customer.
Participants
Thorben Willert
VR Prototyping Lab (WS 2018/19)
Virtual reality will change how content is accessed. It is still open, however, which new business models will prevail and which new ways of monetizing content will be needed to make money with VR in the future.
This is exactly where nextMedia.Hamburg wants to step in with the VR Prototyping Lab and provide Hamburg companies with an innovation infrastructure.
As an initiative of Hamburg's media and digital industries, nextMedia.Hamburg wants to offer students of Hamburg's universities the opportunity to experiment with VR, develop market ideas, and build and test a prototype in close cooperation with companies and the nextReality.Hamburg association.
Over the course of the winter semester 2018/19, the project will serve to exchange know-how and provide the necessary hardware for implementing VR projects in a co-innovation process. The lab will take place in the brand-new VR Transfercenter in the Kreativspeicher M28 in the Speicherstadt and will cover the fields of media, entertainment, advertising/marketing, and games.
Participants
L. Kruse, V. Eckhardt and F. Meyer
Machine Comprehension Using Commonsense Knowledge (WS 2018/19)
Ambiguity and implicitness are inherent properties of natural language that pose challenges for computational models of language understanding. In everyday communication, people assume a shared common ground which forms a basis for efficiently resolving ambiguities and for inferring implicit information. Thus, recoverable information is often left unmentioned or underspecified. Such information may include encyclopedic and commonsense knowledge. This project focuses on commonsense knowledge about everyday activities, so-called scripts. It builds upon the task defined in SemEval-2018 Task 11. A software application will be written to pick the best answer to questions referring to a given text, with a focus on commonsense knowledge.
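SemEval-2018 Task 11 is a multiple-choice setting: given a text and a question, the system picks the best of the candidate answers. Purely to illustrate that setup (this is not the project's model), a trivial word-overlap baseline could look like this:

def pick_answer(text, question, answers):
    # Naive baseline: choose the answer sharing the most words with text + question.
    context = set((text + " " + question).lower().split())
    return max(answers, key=lambda a: len(set(a.lower().split()) & context))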
Participants
Andy Au (TUHH)
Pun Detection and Location (WS 2018/19)
A pun, a form of wordplay, exploits the ambiguity of words to achieve an intended humorous or rhetorical effect. This deliberate effect is a kind of communicative act, and handling it computationally has a number of real-world applications. For example, it can make conversations very human-like. Also, features like recognizing humorous ambiguity can enhance the user experience of automatic translation for non-native speakers. If a computer can detect humor, it can better "understand" human language.
A pun usually exploits polysemy, homonymy, or phonological similarity to another sign. For instance: "I used to be a banker but I lost interest." Here the word "interest" carries two different meanings at once, so the sentence contains a pun. The aim of this project is to detect whether a given context contains a pun and, if so, which word is the pun, i.e. to locate the pun.
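As an illustration of one naive heuristic (not necessarily the method this project will use), the polysemy of each word can be looked up in WordNet; highly polysemous words, especially near the end of the sentence, are candidate pun locations:

from nltk.corpus import wordnet as wn

def pun_candidates(sentence):
    # Rank words by their number of WordNet senses (a crude polysemy signal).
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    scored = [(w, len(wn.synsets(w))) for w in words if wn.synsets(w)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# pun_candidates("I used to be a banker but I lost interest")
# should rank the highly polysemous "interest" near the top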
Participants
Eva Jingyuan Feng (TUHH)
VR Game Jam (WS 2018/19)
The Global Game Jam (GGJ) is the world's largest game jam event (game creation) taking place around the world at physical locations. Think of it as a hackathon focused on game development. It is the growth of an idea that in today’s heavily connected world, we could come together, be creative, share experiences and express ourselves in a multitude of ways using video games – it is very universal. The weekend stirs a global creative buzz in games, while at the same time exploring the process of development, be it programming, iterative design, narrative exploration or artistic expression. It is all condensed into a 48 hour development cycle. The GGJ encourages people with all kinds of backgrounds to participate and contribute to this global spread of game development and creativity.
The Human-Computer Interaction group organizes a jam site in the XR LAB at Informatikum Hamburg, especially for all jammers interested in creating Virtual Reality and Augmented Reality games. We offer a collection of VR/MR/AR devices for the jam, e.g., several HTC Vive systems, Oculus Rifts, a HoloLens, and a CAVE system. Students of the Department of Informatics can earn 3 credits (3 LP) by participating in the jam.
Participants
Results
Cyber Range (WS 2018/19)
Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are supposed to prevent sophisticated attacks against the network and environment they are deployed in. For this purpose, they are becoming more and more "smart" and incorporate built-in machine learning. But the more a system learns automatically, the harder it gets to verify whether it detects advanced and unknown attacks.
On platforms used for teaching, and even more so in online attack/defense competitions (for example international CTFs), a lot of attacks are carried out or at least attempted. In a CTF scenario, the teams initially do not know how to proceed. This potentially yields a lot of data about previously unknown vulnerabilities, or combinations of them, being used to exfiltrate data.
If we combine both aspects, it should be possible to use a CTF or training scenario to test IDS and IPS in an almost real-world setting. With IPS systems it is even possible to integrate the IPS into the scenario as a target to be exploited. If a good monitoring and ground-truth capturing system is integrated into the platform, it becomes possible to analyze why the IDS/IPS did not trigger on an attack, and perhaps also to replay the whole attack in order to test adjusted rules and training. Testing and evaluation of different IDS/IPS systems can then be done on this platform.
In the project and an accompanying MA thesis, the platform should be developed using existing open-source software as far as possible. Then some exemplary scenarios should be implemented and tested. In the end, an evaluation with real IDS/IPS systems should take place, possibly with a bigger competition testing all aspects of this work.
Design Goals
Easy start
The platform as a whole should enable users to start easily and participate in cybersecurity challenges. A web interface should be used to choose and start a predefined scenario, as well as to display a description of what to do, give feedback, and possibly offer some help.
Intuitive User Interface
It should be simple for users to access the resources of a scenario. The easiest way would be some web-based shell emulation or remote-desktop-like access via the web browser. For more advanced users and/or scenarios, VPN access can be offered.
Sharing
It should be possible to export and import scenarios. This would allow sharing a scenario between different instances of the platform and perhaps building a marketplace for challenges. It should also be possible to take one scenario as a base and extend it, for example by adding some resources.
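Such an exportable scenario could, for example, be described declaratively. The schema below is purely hypothetical and only illustrates what a shareable definition might contain:

import json

# hypothetical scenario definition; all field names are illustrative only
scenario = {
    "name": "dmz-web-pivot",
    "networks": ["dmz", "internal"],
    "hosts": [
        {"name": "web01", "image": "ubuntu-18.04", "networks": ["dmz"]},
        {"name": "db01", "image": "debian-9", "networks": ["internal"]},
    ],
    "monitoring": {"pcap": True, "ids": ["suricata"]},
    "flags": [{"host": "db01", "path": "/root/flag.txt"}],
}

with open("dmz-web-pivot.json", "w") as f:
    json.dump(scenario, f, indent=2)  # export; importing is the inverse json.load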
Extensibility
Defining new scenarios should be aided by a library of existing components such as OS images, applications, and monitoring software. It should also be possible to add additional IDS/IPS systems to existing scenarios, to test their response in an already specified environment, or even to attach them to a replay. Another direction of extensibility would be support for different architectures and processors; this would make it possible to simulate ARM-based IoT devices or smartphones. Integrating simulated or real industrial components such as SCADA or Modbus systems would open up further possibilities.
Complete Visibility
The monitoring of the platform should capture all events necessary to replay all actions, so that a retest of the evaluated IDS/IPS is possible. It should also provide the instructor with data on how the teams are doing and, if possible, determine how far each team has progressed in the scenario. A dashboard showing this progress should also be available to the instructor and/or the user.
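In the simplest case, such a ground-truth record could be an append-only stream of timestamped events; the format below is hypothetical and only illustrates the replay idea:

import json
import time

def log_event(logfile, actor, action, target):
    # Append one ground-truth record; a replay re-issues these records in order.
    record = {"ts": time.time(), "actor": actor, "action": action, "target": target}
    logfile.write(json.dumps(record) + "\n")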
Multiple Teams
It should be possible to use the platform as a team. Optionally, it should be possible to run Red/Blue Team scenarios on the platform, e.g. one team attacking and one team defending the infrastructure. For some scenarios to work, some dummy use of the systems is needed, for example users logged in on a Windows machine, so that the authentication hashes of the logged-in users can be harvested for lateral movement in the network. This can be provided by the instructors manually, by a Green Team, or via recorded actions and network dumps.
For bigger CTF-style competitions, it would be possible to have each team attack a separate instance of the scenario and only share a scoreboard. But it could also be interesting to have each team defend (as the Blue Team) one instance while attacking all other teams; this setup is known as an Attack and Defense CTF.
Networking
Each scenario involving more than one host must have its own isolated virtual network. It must also be possible to have scenarios with more than one network, to simulate multi-network setups such as a DMZ or separate office and production networks. This would allow scenarios in which hopping from one network to another is necessary.
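If containers are used as scenario hosts, isolated per-scenario networks could, for instance, be created with the Docker SDK for Python; this is only a sketch, and the platform might equally well use full VMs:

import docker

client = docker.from_env()

def create_scenario_networks(scenario_id, names):
    # Create one isolated bridge network per logical network in the scenario.
    return [
        client.networks.create(
            scenario_id + "-" + name,
            driver="bridge",
            internal=(name != "dmz"),  # e.g. only the DMZ may reach the outside
        )
        for name in names
    ]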
Scalability
As the platform should be usable by multiple teams at once running complex scenarios, it must be possible to scale it beyond a single server per component, especially to execute the target networks on different VM hosts. This would make it quite simple to add more executing servers. If the networks of different teams are to be joined so that they can attack each other, a private network spanning the executing hosts is necessary.
Security
Since it is the very purpose of the platform to have components of it hacked and exploited, it is necessary to ensure that attackers are confined to the targets and cannot gain unauthorized access to the platform itself. The different users also have to be shielded from each other, so that no user can harm other users of the platform.
Participants
Nils Rokita
ThumbnailAnnotator, a user-driven, visual and textual, human-in-the-loop word sense disambiguation tool (WS 2018/19)
A very active topic in the natural language processing (NLP) research area is the task of word sense disambiguation (WSD). The main goal of WSD is to disambiguate the multiple possible meanings of a word in a specific context. For example, the word 'python' might refer to a snake or to the programming language. Depending on the context it is used in, it is usually easy for humans to identify the author's intended meaning.
For computers, this is a non-trivial task that is still not fully solved. There are many different methods to accomplish WSD, but they can basically be divided into supervised and unsupervised learning algorithms, both of which usually require large background corpora and/or sense inventories (Panchenko et al., 2017; Moro et al., 2014; Miller et al., 2013; Navigli, 2009). Creating these sense inventories is a hard and time-consuming task. For WordNet (Miller, 1995), for example, extensive and elaborate work by linguistic and psychological specialists was required for manual acquisition, and still, completeness cannot be claimed: e.g. new technical terms such as 'python' as a programming language, or named entities such as celebrities or organizations, are missing.
The ThumbnailAnnotator application was developed within a master's project at the Language Technology group at the University of Hamburg. The core idea is to retrieve 'context-sensitive' thumbnails (small images usually used for previewing purposes) for noun compounds and named entities (called CaptionTokens henceforth) that appear in an input text, and to disambiguate a word by editing the priority of a provided thumbnail image. Hence, the thumbnail image serves as a description of the intended meaning of the word. For example, consider the sentence "My computer has a mouse and a gaming keyboard". If a user clicks on the word 'mouse', a list of thumbnails of animals and/or technical devices shows up. The user can now increment or decrement the priority of a thumbnail to induce a visual sense for the word 'mouse' in this specific context. Various information about the CaptionToken is also provided, such as a) the universal dependencies (de Marneffe et al., 2006) that form the context of the CaptionToken, b) the individual tokens that a CaptionToken consists of, and c) a synset from WordNet obtained by automatic Lesk-based WSD (Miller, 1995; Banerjee and Pedersen, 2003). At the current state there is a REST API as well as a WebApp to interact with the ThumbnailAnnotator.
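For the automatic Lesk-based WSD step, NLTK ships a simplified Lesk implementation that illustrates how a WordNet synset is assigned from context; this is a sketch of the general technique, not necessarily the code used in ThumbnailAnnotator:

from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

sentence = "My computer has a mouse and a gaming keyboard"
synset = lesk(word_tokenize(sentence), "mouse", pos="n")
print(synset, "-", synset.definition())
# given this context, Lesk should prefer a device-related sense over the animal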
Participants
Florian Schneider