Vignette is face tracking software for virtual characters, made with osu!framework.

Overview



Vignette is face tracking software for virtual characters, made with osu!framework. Unlike most solutions, Vignette is:

  • Made with osu!framework, the game framework that powers osu!lazer, the next iteration of osu!.
  • Open source, from the very core.
  • Always evolving - Vignette improves with every update, and it tries to get to know you better too, literally.

Running

We provide releases on GitHub Releases and on Visual Studio App Center. Builds go out to a select few people on App Center before a release is created here, so pay attention to both.

You can also run Vignette by cloning the repository and running this command in your terminal.

dotnet run --project Vignette.Desktop

Developing

Please make sure you meet the prerequisites; at a minimum you'll need a recent .NET SDK, since the project is built and run with dotnet.

Contributing

The style guide is defined in the .editorconfig at the root of this repository, and capable editors will pick it up through IntelliSense. Please follow the provided style for consistency.
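
For illustration, entries in such a file typically look like the following. These are hypothetical examples, not the repository's actual rules; always defer to the real .editorconfig:

```ini
# Hypothetical illustration only; see the actual .editorconfig at the repository root.
root = true

[*.cs]
indent_style = space
indent_size = 4
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
```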

License

Vignette is Copyright © 2020 Ayane Satomi and the Vignette Authors, licensed under the GNU General Public License v3.0 with SDK exception. For the full license text, please see the LICENSE file in this repository. Live2D, however, is additionally covered by its own license, the Live2D Open Software License.

Commercial Use and Support

While Vignette is GPL-3.0, we do not provide commercial support. Nothing stops you from using it commercially, but if you want proper dedicated support from the Vignette engineers, we highly recommend the Enterprise tier on our Open Collective.

Comments
  • Refactor User Interface

    Refactor User Interface

    First and foremost, this is the awaited UI refresh, which now sports a sidebar instead of a full-screen menu. It also brings updated styling to several components and updates osu!framework and Fluent System Icons. Backdrops (backgrounds) get a significant update as well, now allowing both videos and images as a target.

    Under the hood, I have refactored theming and keybind management (UI for these is to follow). Themes can now be edited on the fly, but only the export button works; applying themes live will follow. I've also laid the foundation for avatar, recognition, and camera settings, but only as hollow controls that don't do anything yet.

    priority:high area:user-interface 
    opened by LeNitrous 18
  • Refactor Vignette.Camera

    Refactor Vignette.Camera

    This PR fixes issue #234.

    The previous solution I implemented was to simply not add a duplicate item to the FluentDropdown and to warn about it with a console write statement.

    Now, the solution is to index the friendly names so that all options show up. We're now faced with a "can't open camera by index" bug.
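
    A minimal sketch of the indexing idea (the names and types here are illustrative, not the actual Vignette.Camera API):

    ```csharp
    // Illustrative sketch: suffix duplicate camera friendly names with an
    // index so every physical device gets a distinct dropdown entry.
    using System.Collections.Generic;

    static IEnumerable<string> IndexFriendlyNames(IEnumerable<string> names)
    {
        var seen = new Dictionary<string, int>();
        foreach (var name in names)
        {
            if (seen.TryGetValue(name, out var count))
            {
                seen[name] = count + 1;
                yield return $"{name} #{count + 1}"; // e.g. "USB Camera #2"
            }
            else
            {
                seen[name] = 1;
                yield return name;
            }
        }
    }
    ```

    The dropdown then maps each distinct display name back to a device index, which is exactly where the "can't open camera by index" bug surfaces next.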

    opened by Speykious 9
  • Allow osu!framework to not block compositing

    Allow osu!framework to not block compositing

    Desktop effects are killed globally while Vignette is running. Some parts, like disabling decorations, are fine, but transparency, wobbly windows, smooth animations for actions, etc. are all disabled as long as Vignette is running.

    proposal 
    opened by Martmists-GH 8
  • [NeedHelp]It crashed

    [NeedHelp]It crashed

    It crashed the first time I opened it. I'm using Windows 7 Service Pack 1, with dotnet x64 5.0.11.30524.

    In most cases it happened like this: (screenshot)

    And sometimes like this: (screenshot)

    As far as I know, no logs, crash reports, or dumps are created :( Can you help?

    invalid:wont-fix 
    opened by huzpsb 7
  • Vignette bundles the dotnet runtime

    Vignette bundles the dotnet runtime

    It seems the last issue went missing so I'm re-adding it.

    Reasons to bundle:

    • No need for end user to install it

    Reasons not to bundle:

    • User likely already has dotnet installed
    • Installer or install script can install it if missing
    • Prevent duplication of dependencies
    • Allow package manager (or user) to update dotnet with important fixes without the need for a new Vignette release
    • Some systems may need a custom patch to dotnet, which a bundled runtime would overwrite
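
    For reference, this trade-off maps directly onto dotnet's publish modes. A framework-dependent build (runtime not bundled) would be produced roughly like this:

    dotnet publish Vignette.Desktop -c Release --self-contained false

    while passing --self-contained true (plus a runtime identifier such as -r win-x64) bundles the runtime into the output.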
    invalid:wont-fix 
    opened by Martmists-GH 6
  • Evaluate CNTK or Tensorflow for Tracking Backend

    Evaluate CNTK or Tensorflow for Tracking Backend

    Unfortunately, our tracking backend, FaceRecognitionDotNet (which uses Dlib and OpenCV), didn't turn out as performant as expected. The delta is too high to produce significant data, and the models currently perform poorly. In light of that, I will have to build a backend we can control directly instead of relying on others' work whose quality we can't verify.

    Right now we're looking at CNTK and Tensorflow. While CNTK is from Microsoft, there is more groundwork on Tensorflow, so we'll have to decide on this.

    proposal priority:high 
    opened by sr229 6
  • Use FFmpeg instead of EmguCV

    Use FFmpeg instead of EmguCV

    Currently, EmguCV is being used only to handle webcam input. We've had various problems with runtimes not being in the right place and cameras not being detected.

    Thus I propose that we use FFmpeg for that task. I think it will be much easier to deal with, as we can just use it as a system-installed binary. Not to mention the library is LGPL, which is just perfect for our use case.
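
    A rough sketch of what driving a system-installed FFmpeg binary could look like (the DirectShow device name, the resolution, and the raw-RGB-over-stdout approach are assumptions for illustration):

    ```csharp
    // Sketch: spawn a system-installed ffmpeg and read raw RGB24 frames from
    // its stdout. "-f dshow" is Windows-specific; other platforms would use
    // v4l2 (Linux) or avfoundation (macOS).
    using System.Diagnostics;

    var ffmpeg = Process.Start(new ProcessStartInfo
    {
        FileName = "ffmpeg",
        Arguments = "-f dshow -video_size 640x480 -i video=\"USB Camera\" " +
                    "-f rawvideo -pix_fmt rgb24 -",
        RedirectStandardOutput = true,
        UseShellExecute = false,
    })!;

    // Each frame is width * height * 3 bytes of RGB24 data.
    var frame = new byte[640 * 480 * 3];
    var stream = ffmpeg.StandardOutput.BaseStream;
    for (int read = 0; read < frame.Length; )
    {
        int n = stream.Read(frame, read, frame.Length - read);
        if (n == 0) break; // ffmpeg exited
        read += n;
    }
    ```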

    priority:medium area:recognition 
    opened by Speykious 5
  • Lag Compensation for Prediction Data to Live2D

    Lag Compensation for Prediction Data to Live2D

    As part of #28, we have discussed how raw data would result in jittery, rough output, even if the neural network used were theoretically as precise as a human eye at predicting the facial movements of the subject. To compensate for jittery input, we will implement a form of lag compensation.
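
    As a trivial illustration of the problem (and not the algorithm this issue will settle on), naively smoothing a jittery tracked parameter with an exponential moving average trades jitter for lag, which is precisely what needs compensating:

    ```csharp
    // Illustrative only: exponential smoothing of a jittery tracked value.
    // A lower alpha means smoother output but more lag behind the raw data.
    public class SmoothedParameter
    {
        private readonly float alpha;
        private float value;
        private bool primed;

        public SmoothedParameter(float alpha) => this.alpha = alpha;

        public float Update(float raw)
        {
            value = primed ? alpha * raw + (1 - alpha) * value : raw;
            primed = true;
            return value;
        }
    }
    ```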

    Background

    John Carmack's work on latency mitigation for virtual reality devices (source) explains that the latency between the user's physical head movement and the image reaching their eyes is critical to the experience. While the document is aimed mainly at virtual reality, one can argue that the methodologies used to provide a seamless virtual reality experience also apply to a face tracking application, since face tracking, like HMDs, is a very demanding "human-in-the-loop" interface.

    Byeong-Doo Choi et al.'s work on frame interpolation enhances a target video's temporal resolution with a novel motion-prediction algorithm, adaptive OBMC. According to the paper, such frame interpolation techniques have been proven to give better results than the algorithms currently used for frame interpolation in the market.

    Strategy

    As stated in the background, while there are many strategies with which we can perform lag compensation on the raw, jittery prediction data coming from the neural network, we've limited it to these two:

    Frame Interpolation by Motion Prediction

    Byeong-Doo Choi et al. achieve frame interpolation as follows:

    First, we propose the bilateral motion estimation scheme to obtain the motion field of an interpolated frame without yielding the hole and overlapping problems. Then, we partition a frame into several object regions by clustering motion vectors. We apply the variable-size block MC (VS-BMC) algorithm to object boundaries in order to reconstruct edge information with a higher quality. Finally, we use the adaptive overlapped block MC (OBMC), which adjusts the coefficients of overlapped windows based on the reliabilities of neighboring motion vectors. The adaptive OBMC (AOBMC) can overcome the limitations of the conventional OBMC, such as over-smoothing and poor de-blocking

    According to their experiments, this method produces better image quality for the interpolated frames, which would be helpful for prediction in our neural network. However, it comes at the cost of having to process the video at runtime, as their experiments were done only on pre-rendered video frames.

    View Bypass/Time Warping

    John Carmack's work on reducing input latency for VR HMDs suggests a multitude of methods. One of them is view bypass: a method achieved by taking a newer sample of the input.

    To achieve this, the input is sampled once but used by both the simulation and the rendering task, reducing latency for both. However, the input and game threads must run in parallel, and the programmer must be careful not to reference mutable game state, as that would cause a race condition.
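
    A minimal sketch of that idea with hypothetical types (this is not Vignette's actual threading model): the input is snapshotted once, and both the simulation and the renderer read the same immutable snapshot.

    ```csharp
    // Sketch: sample input once per frame into an immutable snapshot that
    // both the simulation and render tasks read, so neither touches shared
    // mutable state mid-frame.
    public readonly record struct InputSnapshot(float Yaw, float Pitch, float MouthOpen);

    public class ViewBypassLoop
    {
        private InputSnapshot latest; // written only by the input thread

        // Input thread: publish the newest sample (guard with a lock or
        // Interlocked in real code to avoid torn reads).
        public void OnNewSample(InputSnapshot sample) => latest = sample;

        public void Frame()
        {
            var snapshot = latest; // sampled once per frame...
            Simulate(snapshot);    // ...used by the simulation
            Render(snapshot);      // ...and by the renderer
        }

        private void Simulate(InputSnapshot s) { /* update model state */ }
        private void Render(InputSnapshot s) { /* draw using s directly */ }
    }
    ```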

    Another method Carmack mentions is time warping, of which he states:

    After drawing a frame with the best information at your disposal, possibly with bypassed view parameters, instead of displaying it directly, fetch the latest user input, generate updated view parameters, and calculate a transformation that warps the rendered image into a position that approximates where it would be with the updated parameters. Using that transform, warp the rendered image into an updated form on screen that reflects the new input. If there are two dimensional overlays present on the screen that need to remain fixed, they must be drawn or composited in after the warp operation, to prevent them from incorrectly moving as the view parameters change.

    There are different methods of warping, namely forward warping and reverse warping, and these can be used together with view bypass. The added complexity of sampling input concurrently with the main loop is manageable, as the input loop is entirely independent of the game state.
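
    In this context the warp could be as cheap as nudging the finished 2D frame by the pose delta accumulated while it was drawn; a sketch, reusing the hypothetical InputSnapshot from above:

    ```csharp
    // Sketch: approximate time warping for a 2D avatar. After rendering
    // with stale parameters, shift the image toward the newest sample.
    public static (float OffsetX, float OffsetY) ComputeWarp(
        InputSnapshot rendered, InputSnapshot latest, float pixelsPerDegree)
    {
        float dx = (latest.Yaw - rendered.Yaw) * pixelsPerDegree;
        float dy = (latest.Pitch - rendered.Pitch) * pixelsPerDegree;
        return (dx, dy);
    }
    // Fixed 2D overlays (UI) must be composited after this warp is applied,
    // exactly as the quoted passage notes.
    ```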

    Conclusion

    The strategies mentioned above would give us a smoother experience; however, based on my analysis, Carmack's solutions are more feasible for a project of our scale. We simply don't have the team or the technical resources to do from-camera video interpolation, as it is too computationally expensive to implement with minimal overhead.

    area:documentation proposal priority:high 
    opened by sr229 5
  • Hook up Tracking Worker to Live2D

    Hook up Tracking Worker to Live2D

    As the final task for Milestone 1, we're going to hook up the tracking worker to Live2D and see if we can spot some bugs before we turn in our release.

    proposal priority:high 
    opened by sr229 5
  • User Interface

    User Interface

    We want to customize the Layout, and to do that we need to do the following:

    • Make the Live2D a draggable component
    • Custom Backgrounds (Green Screen default, white default background, or Image).
    • Persist this layout into a format (YAML, perhaps? See the sketch after the todo list.)

    Todo

    • [ ] Draggable and resizable Live2D container.
    • [ ] Backgrounds support (White background, Green background, user-defined).
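
    If YAML wins out, persisting the layout could look roughly like this (YamlDotNet and the LayoutConfig shape are assumptions, not decisions):

    ```csharp
    // Sketch: persist a hypothetical layout model to YAML via YamlDotNet.
    using System.IO;
    using YamlDotNet.Serialization;

    public class LayoutConfig
    {
        public float AvatarX { get; set; }
        public float AvatarY { get; set; }
        public float AvatarScale { get; set; } = 1.0f;

        // "green", "white", or a user-defined image path.
        public string Background { get; set; } = "green";
    }

    public static class LayoutStore
    {
        public static void Save(LayoutConfig layout, string path) =>
            File.WriteAllText(path, new SerializerBuilder().Build().Serialize(layout));

        public static LayoutConfig Load(string path) =>
            new DeserializerBuilder().Build().Deserialize<LayoutConfig>(File.ReadAllText(path));
    }
    ```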

    Essentially, we're going to have a layout similar to this:

    (layout mockup attached)

    proposal priority:high 
    opened by sr229 5
  • Extension System

    Extension System

    Discussed in https://github.com/vignetteapp/vignette/discussions/216

    Originally posted by sr229 on May 9, 2021. This has been requested by the community; however, it's fairly low priority as we're focused mostly on the core components. The way this works is the following:

    • Extensions can expose their settings in MainMenu.
    • They will be strictly conformant to the o!f model in order to load properly. This is considered the bare minimum required to make an extension.
    • They will be packaged as either a .dll or a .nupkg, which the program can "extract" or "compile" into a DLL; something we can do once we have a better idea of how to dynamically load assemblies.

    Anyone can propose a better design here since this is an RFC; we appreciate alternative approaches.
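
    To anchor the discussion, a minimal sketch of the .dll route (IExtension and everything around it is hypothetical):

    ```csharp
    // Hypothetical sketch: discover and instantiate extensions from a
    // directory of assemblies. Real dynamic loading would also need
    // isolation (e.g. AssemblyLoadContext) and error handling.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    public interface IExtension
    {
        string Name { get; }
        void Load(); // e.g. register the extension's settings in MainMenu
    }

    public static class ExtensionLoader
    {
        public static IEnumerable<IExtension> LoadAll(string directory) =>
            Directory.EnumerateFiles(directory, "*.dll")
                .Select(Assembly.LoadFrom)
                .SelectMany(a => a.GetTypes())
                .Where(t => typeof(IExtension).IsAssignableFrom(t) && !t.IsAbstract)
                .Select(t => (IExtension)Activator.CreateInstance(t)!);
    }
    ```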

    priority:high 
    opened by sr229 4
  • UI controls, sprites, containers, etc as a Nuget package.

    UI controls, sprites, containers, etc as a Nuget package.

    It would be a nice idea to make a separate library that includes all the UI controls, themeable sprites, containers, etc. as a NuGet package. It would allow other developers to integrate it into their projects and have access to a nice suite of UI controls and other components instead of writing them from scratch.

    priority:high area:user-interface 
    opened by Whatareyoulaughingat 6
  • VRM Support

    VRM Support

    Here's a little backlog while we're working on the rendering/scene/model API for the extensions. Since this is a reference implementation for all 3D/2D model support extensions, VRM is going to be our flagship extension and will serve as an extension reference for model support.

    References

    proposal priority:high area-extensions 
    opened by sr229 0
  • Steamworks API integration

    Steamworks API integration

    As part of #251, we might want to include the Steamworks API in case people have a use for it in our Steam releases. It would be optional and hidden behind a build flag.

    proposal priority:medium 
    opened by sr229 2
  • First time user experience (OOBE)

    First time user experience (OOBE)

    Design specifications are now released for the first-time user experience. This will guide new users through setting up the bare essentials so they can get up and running quickly.

    priority:medium area:user-interface 
    opened by sr229 0
  • Internationalization Support (i18n)

    Internationalization Support (i18n)

    We'll have to support multiple languages. A good start is looking at Crowdin as a source. We'll support languages on demand, but for starters I think we'll support English, Japanese, and Chinese (Simplified and Traditional), given we have people proficient in those languages.

    As for implementation, that would be the second part of the investigation.

    good first issue priority:low 
    opened by LeNitrous 13
  • Documentation Tasks

    Documentation Tasks

    We'll have to document the more significant parts at some point. We'd want contributors to have an idea of how everything works in the backend, after all.

    For now we can direct them to osu!framework's Getting Started wiki pages.

    area:documentation good first issue priority:low 
    opened by LeNitrous 0
Releases (2021.1102.2)