The editing details:
I exported the Rylo footage to an iPad, and exported that as 360 video to Google Drive (3840×2160, 91,436 kbps, 23.98 fps, 1 GB). Rick converted the 2D camera footage from .raw to .mov files (3840×2160, 434,655 kbps, 23.98 fps; 10 GB total for the 4 files).
In Adobe Premiere Pro CC 2019 I started with the Rylo's equirectangular footage, and put the 2D shots in layers above. For each of these, I used the "VR Plane to Sphere" effect and used these steps to dial it in:
Finally, separately, I added "Video Effects > Adjust > Levels" to try to match the original footage's saturation and contrast (first Gamma, then RGB Black Input or RGB White Output).
… (I tried adding the "Color Correction > Color Balance (HLS)" effect, but it glitched everything out. erg. Some of the tutorials I found advise against any normal effects, because they may mess with the seam or the whole VR distortion)
The compression details:
The first export was with no "injection." (You used to have to download a free tool from YouTube to "inject" metadata so their machines would know to interpret it as 360 video. I guess checkmarking "VR" in the Adobe export is enough now.)
(h.264, 12 Mbps, 162 MB, 23.976 fps – 3840×2160)
Then I exported the exact same file, but boosted the Mbps from the default 10 to the max 30. (I think this is the key to why our initial Rylo and Ricoh 360 videos looked so awful.)
(h.264, 30 Mbps, 399 MB, 23.976 fps – 3840×2160)
Then I switched to h.265 compression (because some website recommended it), but it limited the max bitrate to 25 Mbps. I started this export around 6pm, and it completed around 2am. 8 hours. yikes!
(h.265 (HEVC): 25,530 kbps, 399 MB, 23.98 fps – 7168×4032)
Finally I imported the .mp4 from the last step and composited the Tvori footage over it (also took 8 hours).
(h.265 (HEVC): 30,999 kbps, 406 MB, 23.98 fps – 7168×4032)
A little extra about VR animation piece(s):
There is this great program “Tvori” which lets you animate 3D objects extremely quickly in virtual reality. I think of it as the ideal way to make “animatics”.
For the "fire animation" piece, I used the maximum Tvori settings for exporting a 360 video (3840×1920, 873 kbps, 24 fps, 1.15 MB), and Color Keyed the grey background out in Premiere (plus a little "Ultra Key" to get a band of grey gradient) (plus a "VR Rotate Sphere" effect to move it around into place). It looks like I missed a small white dot (hot spot) up near the top. oops.
For the "buildings animation" piece, I had a ton of problems keying. I made an inverted-normals sphere with a green texture in Maya, then brought it into Tvori to try to hide the gradient of the sky. But the sphere caught the light/shadows (and wasn't smooth. my bad?). So I gave up on the whole 360 video export. (I'd already animated each thing. I couldn't turn off "the floor" in Tvori, so it was making the bottom half of the frame a much lighter green. I would have needed to delete all the animation to move everything up away from the floor, and reanimate. pssh.) Instead I exported the highest-res 2D video I could from Tvori (3840×2160, 1225 kbps, 24 fps, 4.39 MB). I took that into After Effects to key out the faded floor (using "Color Range" and eyedropping until I got it all, plus a Solid layer below it with the same color as my green sphere sky). I exported that (3840×2160, 3858 kbps, 24 fps, 14.9 MB) and took it back into Premiere to comp like the other 2D footage.
Final thoughts:
There are a lot of rough edges here, but I think it's a huge success. We can get pretty decent 360 video quality out of what we have, and putting 3D objects into it is possible. woo hoo!
Sorry for the quirks in there (audio overlaps, from when I looped the 360 background video to fill time while the 2D footage played out). The second "talking spot" really doesn't line up well with the building awning. hmmf. We need to be careful about what size frame we shoot to insert, and what items are at its edges (like, I think the dead grass and green leaf edges look fine, but the hard lines of the building really stick out as bad).
I need to find some way to crop the 2D footage to make all this easier. But taking it over to AE to mask things out sounds like a pain. Like the "window" footage: if that was matted to just the square window area, I think it'd be much easier to fit into place. Trying to match the distortion on the left side, roof slats, pillars on each side, and the window's middle bar on the right was impossible.
+ I'm not clear why the Rylo cam (the 360 background footage) appears to wiggle slightly several times. Maybe the tripod needed more weight on it? Other ideas?
(if you just watch the seam of one of the HD 2D video clip overlays the whole time, you can really see it).
+ Also, might be good to try to match the lower-contrast grey of the Rylo? I think I had to dim the HD footage because it was right on the edge of blowing out to white. Which is maybe how we tend to shoot HD video. Not sure how to dial this in. (f-stops will mess it all up, yeah? and we might have polarizing filters to consider, depending on light? ugh)
+ might be good to aim the left edge of the Rylo at a minimal-detail part of the scene (as it gets blurry from there forward). And/or stage things so nobody ever walks past its left side.
+ not sure how best to shoot replacements for the points directly above and below the camera (like, to cover the tripod). We could just tilt the camera back 90 degrees to get sky, but moving clouds will be a nightmare to comp in. Maybe we could build a giant tripod to get the camera in roughly the same spot aiming straight down (and hopefully rotate it to avoid shadows). Or a jib arm? i dunno. hmmf.
+ maybe we should settle on an ideal height to the center of the 360 lens and do our best to measure and match that for everything else.
anywho. Just wanted to record some details before I forget. If you have any questions, please feel free to ask.
(this is basically cliff notes from these two links:
EQ: Warm a Voice and Improve Clarity
How to Remove Echo in Adobe Audition
)
Core takeaway: don't get lost in sliders. Set up your whole game system ASAP, then go back and tweak. (a working system is inspiring on its own. like a baby.)
-break-
7. imports vs. prefabs (reusable objects)
8. audio, particles, etc.
9. lighting vs. rendering (postfx)
10. cameras (+ timeline + cinemachine = cut scenes)
11. VR options (+ AR? + Vuforia?)
12. THE FUTURE (proBuilder, SRP, shader networks, post fx volumes)
1. Settings: build (pick your platform first), Player (+ graphics, input)
2. Canvas
Three possible spaces (Screen Space – Overlay or Camera; World Space). Scaler (Constant – Pixel or Physical; Scale With Screen).
Rect Transform (+ Rect tool). Creating a canvas adds an Event System (with an input module, to change out for VR).
UI primitives offer lots of interaction with other systems without programming (note: special variable selection for "Dynamic" updating; sketch at the end of this section)
The UI shader renders back to front (repeatedly). Layering UI can exponentially increase the draw calls.
make font big, then scale down (?).
+ 3D (object) text is a separate hack for avoiding Canvas. Shows through.
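A sketch of that "Dynamic" hookup (all names here are mine): a public method with one matching parameter shows up under the "Dynamic" section of a UI event's dropdown, so a Slider can drive it with no glue code.

using UnityEngine;
using UnityEngine.UI;

public class HealthLabel : MonoBehaviour
{
    public Text label;

    // In a Slider's OnValueChanged dropdown, pick this under "Dynamic float":
    // the slider's current value is passed in automatically.
    public void OnHealthChanged(float value)
    {
        label.text = "HP: " + Mathf.RoundToInt(value * 100);
    }
}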
3. Shaders
A shader (process) tells a GameObject ("mesh renderer") how to apply a material (ammunition).
Material: first set the shader type to see relevant options. Then set rendering mode.
(+ Metallic v Specular Setup? specular lets you mess with highlight and rim)
(+ secondary maps for detail up close)
Shader: the default has all this extra crap. Can write your own – in its own programming space (ugh).
(can dissolve based on a slider, or world coordinates – from the top down, show a second material based on a 3rd material as a mask)
2 sides to any shader: Vertex (shape of the surface; geometry) versus Pixel ("Fragment"; colors what's in between)
cool tricks: Geodesic animated, wind sway
FUTURE: Shader Networks coming (ala Unreal, Maya)
+ Textures: set to minimum resolution! (power of 2) (sprite sheet, auto cut)
4. Physics
Rigid bodies (useGravity, isKinematic). (Configurable joints.) (Colliders vs. triggers – how to access touch point(s); see the sketch after this list.)
– colliders are additive (includes the colliders on children).
– don’t use mesh collider! (draw example of convex calculations?).
– try to keep mass around 1 (or things blow out)
– joints mess with imported skeleton animation system (?mystery?).
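A minimal sketch of the colliders-vs-triggers note (class and names are mine): a solid collider reports contact points when hit, while a trigger only tells you which collider overlapped it.

using UnityEngine;

// Attach to a GameObject with a Collider (one side of the pair needs a Rigidbody).
public class TouchPoints : MonoBehaviour
{
    // Solid collider: physics resolves the hit and hands you contact points.
    void OnCollisionEnter(Collision collision)
    {
        ContactPoint contact = collision.contacts[0];
        Debug.Log("hit at " + contact.point + ", normal " + contact.normal);
    }

    // Trigger ("Is Trigger" checked): no physical response and no contact points;
    // you only learn which collider entered the volume.
    void OnTriggerEnter(Collider other)
    {
        Debug.Log(other.name + " entered the trigger");
    }
}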
5. Animation vs. Animator (mechanim)
…tion: Make Clips. hit record and go. “k” to keyframe.
Call event during animation: is calling a “public” function (from a script on object with the animator controller)
…tor: Set Default state. Make transition (& adjust blend). Conditions (use Parameters). Layers build on each other. “a” frames all.
– view while playing to see which animation is active.
– Create Blend Tree, add Motion (clips). Choose 1D (linear transitions) or 2D (blend based on position)
– If using a "Humanoid" Rig – reach for an object with .SetIKPosition() (Base Layer: checkmark "IK Pass"; sketch below)
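A minimal sketch of that IK reach (assumes a Humanoid rig with "IK Pass" checked on the Base Layer; the class and target field are mine):

using UnityEngine;

// Attach next to the Animator on a Humanoid rig.
public class ReachForObject : MonoBehaviour
{
    public Transform target;   // the object to reach for
    Animator animator;

    void Start() { animator = GetComponent<Animator>(); }

    // Called by the Animator each frame when "IK Pass" is enabled on the layer.
    void OnAnimatorIK(int layerIndex)
    {
        animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1f);
        animator.SetIKPosition(AvatarIKGoal.RightHand, target.position);
    }
}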
6. Profiler
click anywhere to pause. deep dive.
– Don't do string compares (like finding objects by name; sketch below). Don't instantiate. Fear UI rendering. Try to use 1 active light and bake the rest into lightmaps.
… talk about Forward v deferred rendering?
– deeper dive with Frame Debugger (enable)
– shallower dive with “Stats” in Game window.
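For the string-compare warning above, a sketch of the usual fix (the "Player" name/tag is hypothetical): cache name lookups once instead of searching every frame, and prefer CompareTag over comparing tag strings.

using UnityEngine;

public class CacheYourLookups : MonoBehaviour
{
    GameObject player;

    void Start()
    {
        // Slow string search: do it once here, never per-frame in Update().
        player = GameObject.Find("Player");
    }

    void OnTriggerEnter(Collider other)
    {
        // CompareTag avoids the string allocation of other.tag == "Player".
        if (other.CompareTag("Player")) { /* react to the player */ }
    }
}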
7. Imports vs. Prefabs (reusable objects)
turn down image size! images go to Power Of 2 (set images up that way, less overhead for mobile)
drag gameobject into prefab folder to make it reusable. edit in project, or drag to hierarchy and click “apply”
– code: add scene objects as variables when instantiating (+ look into Pooling instead; sketch after this list).
+ Free people: MakeHuman or Adobe Fuse, Mixamo animations. Free 3D objects: Sketchfab, Poly, Remix3D. (honor the license)
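A minimal instantiation sketch for that code note (the prefab and spawnPoint fields are hypothetical; a pool would reuse instances instead of allocating new ones each time):

using UnityEngine;

public class Spawner : MonoBehaviour
{
    public GameObject prefab;     // drag the prefab asset here in the Inspector
    public Transform spawnPoint;

    public void Spawn()
    {
        // Keep the reference as a variable so you can modify or destroy it later.
        GameObject instance = Instantiate(prefab, spawnPoint.position, spawnPoint.rotation);
        instance.name = "spawned-" + Time.frameCount;
    }
}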
8. Audio, Particles, Line Renderer, etc.
Listener on (every) camera. Audio Source(s) play Clips.
– set up temp audio from the start (audio ultimately takes ~10% of the processor, usually. at least)
particles have similar concerns as UI (layered transparency).
one line renderer per object (billboard). trace the xyz of hands or where a raycast is hitting (sketch below).
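A sketch of that raycast-tracing idea (all names mine; assumes a LineRenderer on the same object):

using UnityEngine;

[RequireComponent(typeof(LineRenderer))]
public class PointerBeam : MonoBehaviour
{
    LineRenderer line;

    void Start()
    {
        line = GetComponent<LineRenderer>();
        line.positionCount = 2; // a simple two-point beam
    }

    void Update()
    {
        // Draw from this object forward to whatever the ray hits (or 10m out).
        Vector3 end = transform.position + transform.forward * 10f;
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, 10f))
            end = hit.point;
        line.SetPosition(0, transform.position);
        line.SetPosition(1, end);
    }
}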
9. Rendering (Lighting vs. Post Processing)
open the lighting window. skybox.
bake lightmap(s?)
import Post Processing Stack asset (or wait for next version?)
cool trick: paint on lightmaps
10. Cameras (+ Timeline + Cinemachine = cut scenes)
– Clear Flags: ditch the skybox (see Lighting for more skybox)
– Culling Mask: don’t draw things in this camera
– Depth : layering multiple cameras (larger # is on top)
– Viewport rect: crop the view (W,H: 0 to 1 are percentages). quick map overlay.
– RenderTexture: send a camera's view to a texture (e.g., onto a quad; sketch below).
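A sketch for the RenderTexture bullet (assumes a RenderTexture asset created in the Project window; the field names are mine): point a camera into the texture, then show that texture on a quad's material.

using UnityEngine;

public class SecurityMonitor : MonoBehaviour
{
    public Camera sourceCamera;   // the camera to capture
    public RenderTexture rt;      // a RenderTexture asset
    public Renderer screenQuad;   // the quad acting as a monitor

    void Start()
    {
        // The camera now renders into the texture instead of the screen...
        sourceCamera.targetTexture = rt;
        // ...and the quad's material displays that live view.
        screenQuad.material.mainTexture = rt;
    }
}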
Timeline is a sequencer (for any component). Can wait (loop) until something happens in game (change enemy activity, animations, camera angles).
Cinemachine is a camera crew (look at things, follow them. Main camera can transition through your virtual cameras using Timeline)
– Cinemachine can record video (Adam). (…No gameplay allowed?)
+ note: you can screen-record the Game window, then export to GIF (good for VR demos. Twitter).
+ make movies instead of apps? (Otoy OctaneRender?)
11. VR options (and AR? Vuforia?)
… future talk?
SteamVR (OpenVR): Import the asset. Improves the renderer. messes with physics? many quick examples (instant teleporting, bow).
Oculus: Quickstart w/ Touch (don't trust the frame rate in Stats? bug where it reports the tracking rate?). Oculus Start program.
I don't like VRTK (locked into their system), but it updates often, and many others like it.
12. THE FUTURE (proBuilder, SRP, shader networks, post fx volumes).
GDC talk
Unity blog, plus R&D Labs (photogrammetry de-lighting)
UNITE Conference (ask an expert)
A) The headset is supported natively in Unity now (go to Project Settings and enable VR support. I think it's hidden away under "XR" at the moment; it has been changing location a lot the past few months). Press Play, and you can look around with your Rift goggles. You can use mouse and keyboard, or a gamepad, for input just as you would in any other Unity3D project. And you can develop for SteamVR and the controls will work with both Vive and Rift input (you can't publish that to the Oculus store though, and some mappings are weird: Vive's Grip maps to the Y or B button, while nothing maps to A or X).
If you want to release your app officially through Oculus, you'll need to create a developer account on their Developer website, and set up an App ID through their Dashboard website.
B) To go deeper, and enable Touch using the same ghost hands* you see in the Oculus setup sequence, you'll need to download and install 3 packages, and then tweak 3 components.
(* note: these are called your “Avatar,” along with a ghostly floating head that others can see in networked experiences. You can customize the appearance and color of these elements over in the Oculus Home app).
Download 3 packages from the Oculus Unity Downloads page
– Import the "Utilities for Unity" package (has everything for the interface)
and maybe:
– Import Platform (has Oculus community stuff, security stuff, etc. all the core non-gameplay stuff, basically)
– Import AvatarSDK (this is how you get the standard-looking hand presence. Also has a social scene sample with VoIP)
… or you can search for "Oculus Integration" on the Unity Asset Store (it has all of these and more).
Things to tweak:
1- Disable or delete the existing Main Camera so it isn’t competing for control. CenterEyeAnchor will be the camera used for your goggles.
2- Drag the simple prefab OVRCameraRig into the scene (I prefer this, because I'm writing my own controls for movement). But if you want to start moving with the controllers immediately, drag the OVRPlayerController prefab into the scene (this includes control scripts, and many things as children, including the OVRCameraRig).
3- Drag LocalAvatar (from the Project window, OvrAvatar/Content/Prefabs/) onto TrackingSpace (in the Hierarchy window, childed under OVRPlayerController/OVRCameraRig/). Note: the Oculus Avatar is only allowed to do the basic button reactions (point, thumbs up, fist). For example, you can't make it change shape/finger placement to appear to be grabbing a special shape. To do ANYTHING other than the basic functions, you will need to create your own hands with custom animations.
Resource for programming:
unity-ovrinput
: basic code (wrapped into a minimal MonoBehaviour so it compiles):
using UnityEngine;

public class HandTriggerBall : MonoBehaviour {
    public Transform ball;
    public OVRInput.Controller c;

    void Update () {
        // read the hand trigger (0..1) from controller "c" and slide the ball along z
        float f = OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger, c); //*
        ball.localPosition = new Vector3(ball.localPosition.x, ball.localPosition.y, f);
    }
}
//* or you could replace "c" with "OVRInput.Controller.LTouch" to hard-code it
~ note: if using OVRPlayerController, "ForwardDirection should house the body geometry which will be seen by the player (contains the matrix which motor control bases its direction on)." … I just wrote this down because it seemed like it'd be important later. not actually using it yet.
…
And I'll post separately about how to set up a basic grabbing function (basically you drag a script onto the grabber, and a script onto everything you want grabbable, then tweak some parts of the components). And I have a short list of other basic functions (pressing buttons, painting on a texture, teleporting, magnifying, twirling with the thumbstick, etc.) that I hope to post short entries on. someday. when we have another project.
SUMMARY:
basically, we're going to fake up some pull-down menus.
We make buttons on the base layer that have a state for each possible answer. We make a layer for each question with all these answers as pull-down menu items. When you click one of these pull-down items, it changes the state of that button on the base layer and hides the layer. Then we evaluate the states of all these buttons to secretly select a hidden button. The quiz type for the slide is "Pick One," so it ignores all our tricks and just evaluates these hidden buttons.
So if all the visible buttons were set to the right states, the slide will evaluate to true.
Optionally, we can make little feedback icons (red X's) to the left of each question, so the user can get an idea of which parts they screwed up. We hide these X's when the timeline starts, and when their button is clicked.
PROCESS:
A. Create buttons
1. Add text like “select” (or “click me”)
2. Name the layer for each button (something like “q1”)
(because they’ll be in a long pull down list of objects later on. So the default name like “rectangle 26” will be confusing)
3. Add button States for each potential answer.
(These states store what the user selected. Which state is active will be changed through a trigger on a layer, which we will set up later.)
tip: once you create the first selection button, ctrl-drag it to quickly duplicate. But this will also duplicate states (which can't be renamed. You have to delete and remake them to have a different State Name).
4. Create 2 dumb buttons that are not visible on the main stage (off to the left, for example). Name one "Yes" and the other "No" (or whatever you like). Name their layers too. These are what the slide actually checks to evaluate the right or wrong final answer.
B. Convert slide to Quiz
1. Click the [Insert] "Convert to Freeform" button on the ribbon. Choose the "Pick One" quiz type.
2. Change [Design] "Attempts" from "1" to "unlimited"
(in the ribbon's Scoring area). (This is so users can't move on until they answer all buttons correctly. Otherwise, they'll advance whether right or wrong – and won't be able to retake this quiz at all.)
3. Set the answer list form to have your dumb buttons as options. Select "Yes" as the right answer.
C. Create wrong-answer feedback (Optional?)
1. Click [Insert] "Shape" on the ribbon. Scroll down to "equation shapes" and click the multiplication X symbol. Drag on the stage to make it about the size of a question line. Click [Home] "Shape Fill" to make it red.
2. Duplicate and place one next to each question. (Strongly suggest putting it on the left edge, so it won't be obscured by the quiz's final feedback popup.) Use [Home] Arrange > Align > … to "Distribute Vertically" and "Align Left".
3. Name the layer for each X (something like “x1”)
4. Add a trigger to hide each X when the timeline starts. Make sure this uses the SLIDE's timeline, not the OBJECT's timeline (set the 5th field, "Object:", at the bottom to the slide. The screenshot at right has this field set wrong!).
5. Add a trigger to hide each X when its choose button is clicked. (for quiz retakes)
tip: In the Trigger area, you can copy a trigger (using the two-papers icon), select another object, and paste (using the clipboard icon). This will auto-replace the trigger's "On Object:" (time saver).
D. Create global triggers (for final evaluations).
1. Create a huge “yes” trigger which changes the state of “yes” to selected (when conditions are met). Then enter conditional statements to check if each select button state is equal ( == ) to the state you want. The conditional “List:” defaults to “Variables”, but you change that radio button to “Shapes” to see the States you created earlier.
Make sure these conditionals are all true at the same time by choosing “AND” for all of them (default).
2. Also create a huge “No” trigger. This time, the conditionals should check each button for not-equal ( != ) to right answer, and use “OR” on all of them (so that if ANY is not-equal, this dumb button will be selected).
(note: if you were to only set up the "Yes" trigger, no dumb button would be selected until the user got every single answer correct. So the quiz would report back "no answers were selected," instead of reporting "wrong".)
tip: you could also evaluate multiple text-entry fields instead of button states. Create them with [Insert] Controls > Text Entry. Then, in these big conditional-based triggers, it's important to choose "== (ignore case)".
3. Add triggers to change the state of each X to "Normal" when the user clicks the Submit button (this is what shows the X).
You must add a conditional statement that checks whether the corresponding choose button is not equal ( != ) to the right-answer State.
4. Move the “submit interaction” trigger to the bottom of the list of Triggers, using the down arrow icon in the Triggers sidebar. This ensures the slide will only evaluate our 2 hidden buttons after they’ve been set by the other large evaluation triggers.
E. Create layers
1. Make a layer for each choose button.
2. Name each layer to match (if button was “q5” you might name the matching layer “a5”)
3. Add “Show Layer” trigger to each choose button, and set it to corresponding layer. (this is a good place to use copy/paste trigger icons to save time)
F. Create text fields
1. Make a text field for each answer (on each layer). These should have the same wording that users will see in the choose button States (on base layer).
tip: Text fields don’t have backgrounds by default, but you can use [Home]”shape fill” on the ribbon to set a background color similar to the original choose button.
2. Add a State of type "Hover" to each answer text field. Change its shape fill to a lighter color.
tip: if you want different text fields to have the same style (fill color, hover fill color, text font and color), you can select the text field you like, then use [Home] "Format Painter" to apply these attributes to the next object you click.
tip: After a Hover state is set up, you can just ctrl-drag the first text field and edit the visible text. The hover state will just change the background color without affecting the visible text.
tip: You can alt-drag to move objects without snapping. This is the easiest way to create overlap, so they seem to have one background.
3. Add a "Change State Of" trigger (to each text field) to set the state of the corresponding choose button.
This is where it comes in handy to have given all the choose buttons unique names. Make sure you are setting a State that matches the visible text. You can copy and paste this trigger with the icons in the Triggers area – but you will need to edit the pasted trigger to match the visible text.
4. Add a trigger that hides this layer when clicked (to each text field). Again, you can copy/paste these triggers for speed.
tip: you can save a lot of time by copying your text fields from the first layer and pasting them into all the other layers. Then you just need to edit their content text, and which choose button's state they affect.
HOTKEYS:__________________________________
* ctrl-drag: will duplicate an object.
* alt-drag: will disable snapping (so hold Alt for smoother resizing)
NOTE:
This is a summary of this 30 minute tutorial video (from a wacky guy with a sort of annoying marketing tone): https://vimeo.com/52342544
Now that we finally had the latest hardware, the 1.6.x software install would stop on the third step and ask me to uninstall the old runtime, even after I uninstalled it through the Control Panel. So I found these tips:
1) This thread describes messing with registry settings, but I wasn’t bold enough to try (you can really mess up a computer this way).
https://forums.oculus.com/vip/discussion/21677/please-uninstall-your-previous-runtime-to-continue
2) This video suggested deleting several more files from around the hard drive, and although I followed it all, it never fixed the problem.
https://www.youtube.com/watch?v=2WWrQncMNqs
I wondered if antivirus software was the problem (on the dev site, the 1.6 SDK installer notes you need to disable real-time virus scanning to install). I figured I'd probably just need to reinstall Windows altogether. But then:
3) Finally, deleting this one registry key is what did the trick:
(HKLM local…) \Software\Wow6432Node\Oculus VR, LLC\Oculus Runtime
a tip from this page :
https://www.reddit.com/r/oculus/comments/4t39wy/please_uninstall_your_previous_runtime_to_continue/
p.s. I also downloaded the "check your system" app on a whim, and it claims our Joffrey machine's processor and lack of USB 3.0 ports will be a problem. but. turns out you can get in there and run apps. it just puts up a warning about our hardware being crap whenever you're in a menu.
(+ I hear a consistent crackle in the audio, which I'd guess is due to exceeding the USB 2.0 bandwidth. just guessing)
I'd suggest setting the Scroll View's Scroll Rect (Script) to "Movement Type: Clamped" so it won't bounce past its limits, and unchecking "Horizontal" so there won't be any sideways scrollbar.
+ Tip: You can temporarily disable "Mask (Script)" on the Viewport if you want to see your entire Content object. (I do this to see an entire tall text field, so I can drag its blue-dot edges to be just longer than the text, and less wide for the scrollbar.)
+Tip: You can’t delete the horizontal scrollbar completely (because ScrollView is constantly deciding whether or not to hide it), but you can expand the vertical scroll bar and the Viewport to fill down to the bottom of the ScrollView.
2b) If your Content is text and you plan to change it during runtime, add a component: Layout > Content Size Fitter (Script), then set "Vertical Fit: Preferred Size." This makes it auto-size the Rect Transform to fit the text, so you'll avoid excess dead space at the end of short texts, or cutting off mid-scroll because you put in a lot more text (sketch below).
3) You can auto-arrange a bunch of items inside this scrolling field. Set your new Content to be a panel with a certain size, then add the component : Layout > Vertical Layout Group (or Grid Layout, etc.). Now you can add multiple children to this panel and they will be auto-sized and spread out (they can be of any type: toggle, button, etc. Usually you make one, duplicate it, and go back to tweak each instance)
* Note: for some reason my text keeps getting slightly cut off at the left. If I indent Content or Viewport using its Rect Transform (PosX: 1), the change just gets overwritten back to zero after I press Play. huh. bug?
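For step 2b, a minimal sketch of changing the text at runtime (class and field names are mine; assumes the Content Size Fitter setup above):

using UnityEngine;
using UnityEngine.UI;

public class ScrollTextLoader : MonoBehaviour
{
    public Text contentText; // the Text on Content (the object with the Content Size Fitter)

    public void ShowEntry(string body)
    {
        // Just set the text; "Vertical Fit: Preferred Size" re-sizes the
        // Rect Transform to match, so short and long texts both scroll correctly.
        contentText.text = body;
    }
}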
Final thought:
As of August 1, 2016 this is a strangely obscure trick in Unity (or maybe Google just isn't showing me relevant results when I search "Unity ScrollView." There seem to be no tutorials, explanations, or manual entries on this object, or they're ridiculously overlong)
Thanks to this youtube video for help:
https://www.youtube.com/watch?v=DgNN_VJHnK8
note: you can save your PhotoScan project as a .psx file at any point.
note: you can use Workflow > Batch Process to set up all the following steps in a row (nice for overnight processing)
-1. overview of menu bar icons.
– There is a button to "add photos" at the top left of the Workspace pane. It is next to an "add chunk" button which I've never used (I believe a "chunk" is a way to store multiple sets of photos without them affecting each other). You can use "Workflow > Add Photos…" instead of this button. (This is usually the first action I take.)
– the main “Model” workspace has some useful icons across the top.
* 3 dotted line shapes you can use to select cloud points (for deletion, later)
* 2 blue arrow shapes to manipulate your bounding box area (tells program to ignore what is outside the box)
* 1 pyramid with blue arrows (turn object to be right side up)
* 2 icons for deleting and cropping which I never use.
* then we get 6 icons that reflect the steps you are going through. These are ways to change what is shown: first-pass alignment points, dense cloud points, mesh as vertex-shaded | solid | or wireframe, and the final triangle shows you the textured mesh.
* then we see a camera icon, which will show you the origin of each photo you took (so you can delete and realign any that failed)
* the final icon is a quick view reset. I use this all the time.
– the photo workspace has a different set of icons.
* 4 ways to select areas on a photo (starting with dotted line box), for masking.
* 3 icons that control how your selection will affect the current photo mask.
* 5 things i never use (rotate, zoom, brightness)
* the final 3 icons control: shading (of points, after alignment, I believe), toggling the display of alignment points (so you can mask them out), and reset view
0. Load Photos
I'd recommend looking through your photos before getting started. If you took fewer than 30 photos, the software might struggle (I believe you want at least 60% overlap between photos). If any photos are crazy blurred, I'd delete them. I've kept some slightly out-of-focus photos just because they were the only angle available (better than gaping holes in the final mesh). But as you'll find in a lot of complicated computer magic, it's best to start with the highest quality (because headaches will just add up in each step, and you might waste more time trying to compensate for a bad start than you would have wasted by just going back and getting high-quality photos to start with)
+ obscure note: "Tools > Camera Calibration" is an option if you have a weird/non-standard lens. But I believe the software can deduce your exact camera model from metadata within the photo files. I use a Samsung Galaxy Note 4.
– in the Workspace you can expand a chunk, expand its sub-folder, and then double-click any photo to see it in the main area. I'd suggest looking through your photos (in some other program that lets you move through them quickly) and masking out any huge problem spots before you start aligning the photos. These would be spots where someone walked through the frame (and isn't there in any other shots).
The software is smart enough to ignore details in the background/distance that change (smoke, clouds, water), but if something is right in the middle of your core area for one shot (and you can't just delete the photo entirely), it'll probably help the software to mask that thing out (again, if there are a lot of these things, you might want to just go back and take new photos)
+ obscure note: if you shot from a stationary position (like you just turned in place, taking photos) you must move these photos to their own group and set this group as a "camera station" to help with the alignment.
+ obscure note: the "NA" next to each photo is a code for "not aligned" (you can go back later and right-click to try to align problem photos).
+ obscure note: in the photo pane, you can choose detail view and delete photos rated under 0.5 (select them and choose "Estimate Image Quality").
1. Align Photos (~20 minutes)
– Accuracy: suggest High. This will affect all subsequent steps. (Each step lower than High shrinks each photo by a factor before considering it. "Highest" actually upscales each photo.)
– Pair preselection: speeds things up. (I think it lets the software group correlated photos, so each new photo doesn't have to be compared to every processed photo… but it also has something to do with using a quick, super-low-quality pass to guess which photos are likely overlapping)
– Advanced> Key points: 70,000 is default (sets max number of points of interest to isolate across entire photo set. can set to zero to find as many as possible)
– Advanced> Tie points: 4,000 suggested (sets max number of shared points to find per photo. Set to zero to disable tie point filtering. You can also reduce this later, “Tools > TiePoints > Thin Point Cloud…” to better prepare for dense cloud after alignment is settled.)
note: I boost both of these point counts up when i have more photos than usual.
+ if you've set any masks on photos, you can check Advanced: "Constrain features by mask": this makes sure none of the masked "feature points" are used in the sparse mesh construction or camera alignment. Masked areas are later ignored for dense cloud and texture generation.
note: you can make masks in some other program and import them (as alpha channel, separate image, background ref, etc.). press “esc” to clear mask from a photo.
* After it is done processing:
– Click the "Show Cameras" button – if any are way off, delete them. (Then you can select them and align them again.(?))
– adjust bounding box around what you want to see moving forward (crop out stuff like messy floor, or scraps of trees above). Best to make this as small as possible (so the maximum amount of your dense cloud is based on things you care about)
– you can also select and delete points directly at this step (if you don't want them used in calculations for the dense cloud). This is a good thing to do for extraneous noise (points floating in the air where you know there is no relevant geometry)
Advanced note: you can use “Edit > Gradual Selection…” to filter the bad projected points based on some zany criteria (like how many photos each point needs to appear on).
Some points are clearly far away from where they should be ("reprojection error," aka a false match). If you delete any of these, you really should run Tools > Optimize Cameras (select all?) to improve camera alignment accuracy.
2. Build Dense Cloud (~3 hours)
– Quality: medium to highest (couple minutes to an hour)
– Advanced: Depth Filtering: a depth map will be made for each photo. This controls how much fine detail to ignore (go "Mild" if you require some small details. Go "Aggressive" if you'd rather smooth out noise)
* After it is done processing:
– select and delete points you don’t want to see included in your final mesh. (all points visible at this step, inside your bounding box, will be used to set vertices in your final mesh!)
Note: you can set a program Preference to save depth maps, which will speed up any future dense point cloud generations.
Advanced note: with “Tools> Dense Cloud > Select Points by Color” you can remove things that clearly aren’t part of your subject
3. Build Mesh (~15 minutes)
– surface type: always go "Arbitrary" (Height Field is only for voxel-like horizontal surfaces, like aerial photos of terrain, though it generates much faster).
– source data: always go dense cloud ("sparse cloud" means it would base the mesh off the initial tiny point cloud used to align photos)
– Face Count: depends on speed and desired detail. Note that you can enter a custom amount if you want much more or less than it offers. Zoom in on the mesh afterwards and see if it is dense enough (then check your dense cloud: if there aren't any points in the problem area, you may need to go back and trim your bounding box down so that area will get more attention, or you may need to start over with more photo coverage)
– Interpolation: this controls hole filling. Enabled is best. Disabled means no holes will be filled, and it takes longer to calculate (usually looks crummy). Extrapolated means it will try to guess as much as possible (usually leads to bizarre stretchy walls in areas you don't want)
* After it is done processing:
you can select polygons and delete them.
Advanced note: you can use "Edit > Gradual Selection…" to select floating wisps of garbage (using "Connected component size"), or ridiculously oversized polygons (like those from extrapolated interpolation, using "Polygon size"). The "size" is a percentage of the entire model size. Also, you can expand the selection with PageUp (+Shift)
4. Build Texture (~5 minutes)
* if you used any questionable (slightly blurry) photos, you should view the Photos pane and disable them before starting this calculation.
– Mapping Mode: “Generic” is best (makes no assumptions about subject). “Adaptive orthophoto” mostly values flat planar terrain, but will separate out UVs for vertical portions (good if you have no slopes. just strictly horizontal and vertical surfaces). “Orthophoto” is like an aerial photo (will have very poor texture quality in vertical areas). “Single Camera” will let you project texture across entire mesh from one camera’s perspective (you can pick which camera by name, after choosing this option)
+ “Keep UV” must be used if you want to preserve the UVs you assigned in some other program (when reimporting to slap texture on)! This will also save tons of time if you want to try out different blending modes.
– Blending Mode: Controls how overlapping photo areas are mixed, per pixel.
Use "Mosaic" (averages low-frequency photo elements, while high-frequency detail is taken from the most relevant camera)
“Average” evenly mixes everything (likely losing fine details).
Max/Min Intensity will use pixel from whichever overlapping photo has the max/min value for that pixel.
"Disabled" (uses the most relevant camera for each pixel… tends to feel very rough. no anti-aliasing feel.)
– Texture Size: make sure you use Power-of-Two dimensions (2048 is my max, if heading towards games or online viewing)
– Advanced: Color Correction: this will try to make all photos have the same color values, even if brightness changed a lot. Takes a LOT more time. I'd only do this if you really couldn't deal with the results.
5. Export
– you can upload directly to Sketchfab if it all went perfectly. Convenient!
– I usually save as a Wavefront OBJ, which generates a mesh file and a texture file that I can import into many other programs for cleanup or background use.
– Once you clean up the mesh in this file, and/or fix the UVs, you can reimport it into PhotoScan to generate a new texture. But only if you didn't change the orientation of the mesh! If it's upside down or tilted, you should fix this orientation in PhotoScan before exporting. I'd recommend using Autodesk MeshMixer for cleanup (good tools for fixing holes, and paintbrushes for interactive smoothing while retopologizing. and more).
“Tools > Import > Import Mesh” is how you bring in an external mesh (supports: .stl, .fbx, .obj, and more). Usually you import a mesh so you can project textures onto it from your aligned photos.
note: some people advise creating a new chunk before importing, so you don't overwrite your existing work (but I usually don't save after importing, since I'm just there for a one-off texture).
Lingering Questions:
1) you can export points in many formats… so can some other program offer better meshing from dense cloud?
2) how do you align chunks? (there is a chunk-alignment icon I've never used)
3) Others have chosen Adaptive Orthophoto for texture mapping. Should check its UVs (Mosaic is a mess, so if A.O. isn't horrible, maybe it's a better starting point?)
(you can check UVs under tools)
4) you can generate masks from a generated 3D model? So you could start over with the exported OBJ and use it to tighten everything up?
5) you can export stitched panoramas from a camera station group? (try it!) (can also export images with lens distortion removed).
6) should you move all un-aligned photos to their own chunk? see if you can align them back in later?
Ref:
this 13m Vimeo was very helpful:
(and this 11m follow-up helped with cleanup tips in Autodesk MeshMixer: https://vimeo.com/123702711 )
– Game Developers Conference
(next year), San Fran. (Huge conference for all the folks making video games worldwide. 27,000 people this year. A wide variety of cutting-edge technology is always on display. This is my number one conference each year.)
– Games 4 Change
June 23-24, NY. (I'm going to this at the end of the month. Great speakers. might be small?)
– Revolutionary Learning
August 17-19, NY. (might be too board-game centric? looks smart/useful though. Was my backup plan this year)
– SIGGRAPH
July 24-28, LA (in Disneyland). (Super technical. but often teases tons of exciting graphics research that never gets released.)
– Serious Play Conference
July 26-28, NC. I've gone to this a couple times (at DigiPen and Carnegie Mellon), but have zero interest in the University of North Carolina.
– UNITE
November 1-3, LA. My gang hit this in Seattle a couple years back. Great talks if you’re working in Unity3D.
– Digital Design & Web Innovation Summit
Sept 15-16, LA. Mark suggested.
– WebVisions
Just happened a couple weeks ago. Right here in Portland. It has shrunk dramatically over the years, but a couple people from our group always seem to go.
– Vision Summit
James went to this in February and everyone got a free HTC Vive system. (but it was well documented online).
– PAXdev
Aug 31+, Seattle. I went to this a couple years back. Small 2-day thing (as much about board and card games as video games). Fun, but limited connection to learning. Nice excuse to get into PAX though (they quietly give each attendee the option to also attend PAX)
– Oculus Connect
this is usually well documented online (YouTube playlist), and had a VR livestream of the keynote (so why travel?). (I've been tempted to go each year though, for the VR thinking/focus/hype.)
Google I/O and Samsung Dev Days also sound very interesting, but are well documented online (i.e., almost every session is now available on YouTube. so why travel?)
– An Event Apart
(next year?) Mark noted. web design stuff. (Stephen also noted. Web design/dev centric)
– ASU GSV Summit
Victor suggested (next year)
++++++++++++++++++++++++++
Alan noted:
* International Conference on Interactive Digital Storytelling
November 15th-18th. Institute for Creative Technologies, University of Southern California
* Distance Teaching & Learning Conference
August 9th – 11th. University of Wisconsin – Madison
* Silicon Valley Virtual Reality Conference
April 27th – 29th (Passed). San Jose Convention Center
* IEEE Virtual Reality
March 19th – 23rd (passed). Greenville, SC Hyatt Regency Greenville Hotel