Frequently Asked Questions
Avatar SDK allows users to automatically generate recognizable 3D avatars based on their photos. You can check our demo to try our technology on your photos.
The product can be used as a cloud service, or it can be run on the client side to compute avatars locally.
A free two-week trial is available on the website. To start the trial, you will be asked to create an account and add a payment source.
If you plan to create Head 1.2 avatars (a head with a bust and a hairstyle generated from a photo) with our cloud service, the Plus subscription will work best for you. Head 2.0 avatars (a head with a replaceable hairstyle that can be attached to your full-body model), FitPerson avatars, and MetaPerson avatars require the Pro subscription. The Pro subscription also offers the option to generate avatars locally on a client device, without sending images to our cloud, and has a lower per-avatar cost (25,000 avatars per month at $0.03 per avatar). If you are targeting a higher number of avatars per month, please contact us to discuss a custom pricing plan with an even lower cost per avatar.
See our pricing page for details.
After starting a trial, choose which Avatar SDK product you want to use: the cloud service with the REST API, the native SDK, an executable to run in your own cloud, or the Unity or Unreal Engine plugin. If you want to use the REST API, you can find all the necessary information and samples at https://api.avatarsdk.com/. If your project is based on Unity or Unreal Engine, or you plan to use the native libraries or the executable, you can download them from your developer dashboard; the developer documentation is available at https://docs.avatarsdk.com/.
We recommend trying Avatar SDK in a desktop/laptop browser. If you want to try Avatar SDK on a mobile phone, you can use our applications for iOS and Android. You can also look at RemoteFace, our virtual camera for video conferencing that shows your avatar, driven by your camera and lip sync, instead of the usual video feed.
Avatar SDK has five main types of avatars: Face, Head 1.2, Head 2.0, FitPerson, and MetaPerson. More information about each type can be found at https://avatarsdk.com/avatars/. The latter three are available only on the Pro subscription. If you need avatars with busts, use the Head 1.2 or Head 2.0 pipeline; unlike Head 1.2, Head 2.0 produces detachable haircuts. If you need only a head, choose the Face pipeline (a bald head with wigs) or Head 2.0 (generated haircuts in addition to wigs). If you are interested in a full-body avatar, you can use the UMA 2 plugin version, the FitPerson pipeline, or the MetaPerson pipeline. The main difference between FitPerson and MetaPerson is body shape: MetaPerson uses a fixed shape, so different avatars share the same body, while a FitPerson body is always unique, and you can change its parameters before creating an avatar.
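As a rough illustration of how a pipeline choice maps onto a cloud generation request, the sketch below assembles the form fields for a "create avatar" call. The field names (`pipeline`, `pipeline_subtype`) follow the public REST API documentation, but treat the exact identifier values as assumptions and verify them at https://api.avatarsdk.com/ before use.

```python
# Sketch: choose a pipeline and assemble the form fields for an
# avatar-generation request to the Avatar SDK cloud API.
# The photo itself is uploaded separately as a multipart file field.
# Pipeline and subtype identifiers below are illustrative -- check
# https://api.avatarsdk.com/ for the authoritative values.

def build_avatar_request(name, pipeline, pipeline_subtype=None):
    """Return the non-file form fields for a create-avatar request."""
    fields = {"name": name, "pipeline": pipeline}
    if pipeline_subtype:
        fields["pipeline_subtype"] = pipeline_subtype
    return fields

# A Head 2.0 head-only avatar (detachable haircuts, no bust):
request = build_avatar_request("my_avatar", "head_2.0", "head/mobile")
```

The same helper covers any pipeline; only the identifier strings change per the API docs.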
You can see our pricing on the Pricing page. Choose a plan depending on how many avatars you need to compute per month and which avatar pipeline you are interested in. Please note that there are annual and monthly options; in both cases, your card is charged once a month on the billing date. You can cancel your subscription at any time in your profile, or schedule your account cancellation during the trial on the billing page.
You can schedule your account cancellation on the billing page in your profile. Trial and general subscriptions can be cancelled at any time and will remain active until the end of the current billing period. Our support team will be happy to help you with subscription management; please send your request to support@avatarsdk.com from the email address associated with your Avatar SDK account.
Yes, we have an offline version of Avatar SDK. It is available on the Enterprise plan; you can send an inquiry by clicking "Request a quote" on the Pricing page. We also recommend looking at the Local Compute version, which can be used as an alternative.
There are monthly quotas included in your subscription. Our billing system will charge you extra for the avatars created above the quota. You can check the details for a specific subscription plan on the Pricing page or in your Profile after registration.
By default, the SDK generates the avatar at the maximum possible resolution. Almost all our pipelines also support several levels of detail. You can find more information here.
The Local Compute version executes deep learning models directly on the client device, and these models are quite heavy. The Cloud SDK is lightweight, so we recommend it if you want to build a small-sized application. You can also find advice here on how to decrease the application size for the Unity Local Compute plugin.
We support glTF, GLB, FBX, OBJ, and PLY. However, not all of these formats are supported in every SDK variant (REST API, Unity plugin, native library, etc.) or for every avatar pipeline.
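A small sketch of client-side validation against this format list; which formats a particular SDK variant and pipeline actually accept must still be checked in its own documentation.

```python
# Sketch: normalize and validate a requested mesh export format against
# the formats listed in the FAQ. Per-pipeline availability varies and is
# not checked here.

SUPPORTED_FORMATS = {"gltf", "glb", "fbx", "obj", "ply"}

def choose_export_format(fmt):
    """Normalize a format name (case, leading dot) and validate it."""
    fmt = fmt.lower().lstrip(".")
    if fmt not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported export format: {fmt}")
    return fmt

fmt = choose_export_format(".FBX")  # -> "fbx"
```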
You can find the latest versions of the SDK and their documentation here.
Our avatars are compatible with 3rd-party lip-sync tools; you can find more details here.
The Cloud SDK is lightweight and works on all devices, even in WebGL. Local Compute avatar generation currently works on Windows, Ubuntu, Android, and iOS.
Avatar SDK contains a solution to generate rigged full-body avatars from images. You can find more details in our blog. This functionality is currently available in all our plugins, you can refer to the curl sample on how to reconstruct a full-body model in the FBX format or to this Unity sample scene.
Moreover, we provide an "Avatar SDK for UMA" extension for the Avatar SDK Unity Cloud plugin. This extension allows for generating avatars compatible with the Unity Multipurpose Avatars (UMA 2) plugin. You can find more details here.
All these options are available on the Pro subscription plan or higher.
If you need to attach our avatars to different bodies (like the seamless attachment of avatars to your body models with texture blending), please contact us at support@avatarsdk.com.
The Local Compute version requires an internet connection only to check whether your account is allowed to generate avatars and to report the number of computed avatars to our system; no images or other data are uploaded.
Our SDK only generates rigged avatars; it does not include tools for generating animations. You have to use 3rd-party lip-sync, text-to-speech, or face-tracking tools. For example, we recommend the Oculus Lipsync SDK, Speech Graphics software, or SALSA for lip sync. You can find tutorials on using them with our avatars here.
You can look at the sample of such integration in the RemoteFace application.
Yes, it is possible. We have a special type of avatar ("static") that does not contain internal meshes and can be 3D printed. You need to call the Avatar SDK API to compute these avatars with the "static" subtype; this gives you a model that is almost ready for 3D printing.
Alternatively, you can use any other pipeline or subtype, but you will need to post-process these models to prepare them for 3D printing and remove unneeded parts. You can use any 3D editor for this task.
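A minimal sketch of requesting the "static" subtype mentioned above. The "static" subtype name comes from this FAQ; the pipeline identifier and field names are assumptions taken from the public REST API docs, so confirm them at https://api.avatarsdk.com/.

```python
# Sketch: request form fields for a 3D-printable avatar using the
# "static" subtype (no internal meshes). The pipeline identifier is
# illustrative -- verify against https://api.avatarsdk.com/.

def build_print_request(name, pipeline="head_1.2"):
    """Return form fields for a static (printable) avatar request."""
    return {
        "name": name,
        "pipeline": pipeline,           # hypothetical default
        "pipeline_subtype": "static",   # subtype named in the FAQ
    }

req = build_print_request("printable_head")
```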
Face Capture SDK is currently in beta and is available only as a native SDK for Windows and iOS. If you want a trial, send us a request at support@avatarsdk.com. Please note that Face Capture SDK is available only to Plus or Pro subscribers.
Our focus is on creating recognizable avatars for our users, so we offer only a limited set of modifications: you can change eye color, hair color, haircuts, and teeth. For the full list of modifications, please see https://api.avatarsdk.com/#avatar-modifications.
Yes, Avatar SDK provides a set of outfits for FitPerson and MetaPerson avatars. Each outfit consists of a mesh, a texture, a body visibility mask, and roughness, metalness, and normal maps. Samples for the REST API are located here.
You can add such resources on your own in two ways: (1) take advantage of the avatars' fixed topology (note that only two pipelines have this feature: Face, and Head 2.0 with the head/mobile subtype); or (2) get facial landmarks and place the necessary assets accordingly. If you want Avatar SDK to take care of reshaping your assets for each avatar shape, you can import them into Avatar SDK; the details of such a project are available on request.
We have three sets of artificial haircuts, which are modified during avatar computation to match each particular avatar's head geometry: the base set (3 male and 3 female haircuts), the facegen set (39 haircuts in total, some of them unisex), and the plus set (50 haircuts in total, some of them unisex).
Also, there is a generated haircut for each avatar (for Head, FitPerson, and MetaPerson pipelines). This haircut is unique and generated for each particular avatar based on the source image.
These sets are available on different plans and apply to different avatar pipelines. More information is available here.
To use different haircuts, specify all of them in the avatar generation request. Then download them with the avatar as separate meshes; this allows you to change the haircut later.
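The request side of this can be sketched as a small parameters payload listing the desired haircuts. The parameter layout and the haircut names below are assumptions for illustration; the exact `parameters` JSON your pipeline expects is documented at https://api.avatarsdk.com/.

```python
# Sketch: request several haircuts with one avatar so each arrives as a
# separate mesh and can be swapped at runtime. Layout and haircut names
# are illustrative -- see https://api.avatarsdk.com/ for the real schema.

import json

def build_haircut_parameters(haircut_names):
    """Return a parameters JSON string listing the haircuts to compute."""
    return json.dumps({"haircuts": list(haircut_names)})

params = build_haircut_parameters(["short_curls", "long_wavy"])  # hypothetical names
```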
Our Head 2.0 avatars have fixed head topology, so it is possible to combine them with a body model. We recommend taking a look at our FitPerson or MetaPerson avatars generated with the body_0.3 pipeline. If you need our head avatars with your full body model, we can create a custom pipeline for you. We will need to look at your body model to provide you with a timeline and costs for such a project.
Unfortunately no: we use separate meshes to make it possible to change the haircut (except for the Head 1.2 pipeline, in which the bust and haircut are not detachable). However, you can try our web demo, which allows downloading the avatar and hairstyle as a single mesh for Head 2.0 avatars.
Fixed topology means that the heads (not busts) have a constant number of vertices with the same indices. For example, when adding your own landmarks, you can find the required vertices on one model and use them as landmarks, because their indices will be the same for all avatars.
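The landmark trick described above can be sketched in a few lines: because vertex order is identical across avatars, an index picked once identifies the same anatomical point on every head mesh. The index value and coordinates below are made up for illustration.

```python
# Sketch: with fixed topology, a vertex index chosen on one head mesh
# marks the same anatomical point on every other head mesh.

def landmark_positions(vertices, landmark_indices):
    """Look up landmark coordinates by vertex index.

    vertices: list of (x, y, z) tuples for one avatar's head mesh.
    landmark_indices: indices picked once, on any avatar; they remain
    valid for all avatars because the topology is fixed.
    """
    return [vertices[i] for i in landmark_indices]

# Two avatars share vertex order, so index 2 means the same point on both:
avatar_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)]
avatar_b = [(0.1, 0.0, 0.0), (1.1, 0.1, 0.0), (0.6, 1.2, 0.1)]
nose_tip = [2]  # hypothetical index, picked once in a 3D editor
points_a = landmark_positions(avatar_a, nose_tip)
points_b = landmark_positions(avatar_b, nose_tip)
```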