At Nubificus, we are exploring systems software optimizations for deploying lightweight applications in the Cloud and at the Edge. Building on existing open-source tools and frameworks, we mix and match application dependencies and tailor the operating system layer to match each application's requirements. We are a fully distributed company working from the UK, Greece & Spain.
Does hacking the OS/application stack sound like your kind of project? Do you see yourself changing the way users around the world deploy their applications? If the answer is yes, we would love to have a chat and welcome you to our team.
Please send an email to jobs@nubificus.co.uk including:
Make sure to include the job ID in the subject.
In all positions below, we offer:
Nubificus LTD is an equal opportunity employer. We welcome applicants of diverse backgrounds and hire without regard to age, gender, color, religion, national origin, or any other individual characteristic.
We are currently looking for candidates for the following positions:
Internship title: Deploying ML workflows in the Cloud & at the Edge
The Cloud computing paradigm appears ideal for deploying and managing application execution at scale. In order to support the cloud computing execution model at the Edge, devices need to support virtualization and run a full hypervisor stack. At the same time, hardware acceleration offers a high degree of computational throughput in a very small power envelope for a wide range of application domains. Many applications, e.g. Machine Learning, Computer Vision, and HPC, rely on hardware accelerators such as GPUs, FPGAs, and NPUs to increase the amount of data they can process while reducing their energy footprint compared to traditional CPU-only systems. Recently, with the vast amounts of data originating from sensors, IoT devices, and Edge nodes, the need to perform intensive computations at the Edge has grown.
To efficiently deploy and manage ML workflows, the community uses frameworks such as Kubeflow, Jupyter Notebooks, TensorFlow Training, and TensorFlow Serving.
During this internship the student will familiarize themselves with these frameworks and technologies, and will deploy an experimental k8s cluster on a number of Edge devices (low- and high-end), such as the NVIDIA Jetson Nano, NVIDIA Jetson Xavier, and Intel NUCs. Additionally, they will deploy an example ML application and evaluate the performance of training and inference.
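As a rough illustration of what such a deployment might involve, below is a minimal sketch of a Kubernetes manifest that pins a TensorFlow Serving inference pod to an arm64 Edge node. The image tag, model name, and label are illustrative assumptions, not a prescribed setup.

```yaml
# Hypothetical example: serve a model on an arm64 Edge node (e.g. a Jetson).
# Image, model name, and app label are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tf-serving-edge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tf-serving-edge
  template:
    metadata:
      labels:
        app: tf-serving-edge
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64        # schedule onto the Edge device
      containers:
      - name: tf-serving
        image: tensorflow/serving:latest # assumes an arm64-capable image
        args: ["--model_name=example", "--model_base_path=/models/example"]
        ports:
        - containerPort: 8501            # TF Serving REST API port
```

Scheduling via the `kubernetes.io/arch` node label is one simple way to target heterogeneous (x86/arm64) clusters like the one described above.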
The purpose of this internship is to evaluate the feasibility and complexity of running Kubernetes and Machine Learning workloads on Edge devices, and to facilitate k8s bootstrapping on such devices.
Essential skills:
- Comfortable administering Linux systems
- Comfortable working with containers & orchestration frameworks (such as K8s)
- Comfortable programming in one of the following: C/Rust/Python/TF
Desirable skills and experience:
- Sound knowledge of container orchestration and management
- Linux kernel, or system-level programming experience
Internship title: Explore TF internal operations to be offloaded to a generic transport mechanism
Short description:
TensorFlow supports offloading computations to hardware accelerators (GPUs, TPUs). In order to use an accelerator, a TensorFlow instance currently needs direct access to the relevant hardware. Although this requirement doesn't prevent distributed deployments (either by TensorFlow itself or by higher-level frameworks) on heterogeneous hardware, it limits the flexibility of integrating hardware accelerators into virtualized setups where TensorFlow applications are executed, especially when there are latency constraints (e.g. AWS Lambda/Firecracker). Adding a generic transport mechanism to TensorFlow's internal operations could provide the necessary abstraction to efficiently use accelerators in all the aforementioned environments.
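To make the idea of a "generic transport mechanism" concrete, here is a hypothetical Python sketch (not TensorFlow's actual API) of the abstraction boundary such a mechanism would introduce: the caller issues the same operation whether the kernel runs in-process or is forwarded to a remote accelerator host. All class and method names below are invented for illustration.

```python
# Hypothetical sketch of a generic transport layer for op offloading.
# A framework op hands its inputs to a Transport, which may execute
# locally or forward them to a remote accelerator.
from abc import ABC, abstractmethod


class Transport(ABC):
    """Abstracts where/how an offloaded operation actually runs."""

    @abstractmethod
    def execute(self, op_name, inputs):
        ...


class LocalTransport(Transport):
    """Runs the op in-process, analogous to direct access to a local GPU."""

    def execute(self, op_name, inputs):
        if op_name == "mul_scalar":  # toy stand-in for a real kernel
            a, b = inputs
            return a * b
        raise NotImplementedError(op_name)


class RemoteTransport(Transport):
    """Would serialize inputs and ship them to a remote accelerator host;
    here it simply delegates, to show the abstraction boundary."""

    def __init__(self, backend):
        self.backend = backend

    def execute(self, op_name, inputs):
        # A real implementation would serialize the inputs, send them over
        # a socket/vsock, and await the result from the accelerator host.
        return self.backend.execute(op_name, inputs)


# Calling code is identical regardless of where the op runs:
local = LocalTransport()
remote = RemoteTransport(backend=local)
print(local.execute("mul_scalar", (3, 4)))   # 12
print(remote.execute("mul_scalar", (3, 4)))  # 12
```

The point of the sketch is that once op execution goes through such an interface, swapping the local path for a networked one (e.g. from inside a Firecracker microVM) requires no changes to the calling code.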