Meeting minutes
<bajones> Hey! It worked!
Logitech MX Ink: Target ray space not aligned with controller grip
<ada> https://
Rik: Shipping a new controller. Question on where the tip of the pen is
… Need some mechanism within the spec to specify this
Brandon: Target ray space at the tip seems to be correct
… Is there anything that says where that origin should be?
<alcooper> > The targetRaySpace attribute is an XRSpace that has a native origin tracking the position and orientation of the preferred pointing ray of the XRInputSource (along its -Z axis), as defined by the targetRayMode. (https://
Ada: Feels like an interoperability thing. Might be doable with tooling.
<bajones> From the spec (to add to what Alex said): "tracked-pointer indicates that the target ray originates from either a handheld device or other hand-tracking mechanism and represents that the user is using their hands or the held device for pointing. The orientation of the target ray relative to the tracked object MUST follow platform-specific
<bajones> ergonomics guidelines when available. In the absence of platform-specific guidance, the target ray SHOULD point in the same direction as the user’s index finger if it was outstretched."
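[Scribe note: for context, a minimal sketch (assuming the standard WebXR TypeScript typings) of reading the target ray pose under discussion; `drawRay` is hypothetical app code.]
```ts
declare function drawRay(transform: XRRigidTransform): void; // app code, not WebXR

function drawPointerRays(frame: XRFrame, refSpace: XRReferenceSpace) {
  for (const inputSource of frame.session.inputSources) {
    if (inputSource.targetRayMode !== 'tracked-pointer') continue;
    const rayPose = frame.getPose(inputSource.targetRaySpace, refSpace);
    if (!rayPose) continue;
    // Per the spec text quoted above, the ray originates at the space's
    // native origin and points along its -Z axis.
    drawRay(rayPose.transform);
  }
}
```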
Ada: Might try attaching different controllers to see what happens.
Rik: Had an older software interface that worked. When it moved to OpenXR, the effort was to not break existing systems
Brandon: Even if this was the native ray, perhaps the Meta browser can "nudge" the ray into the right position
Rik: MetaQuest (MQ) is going to return the target ray space as the OS reports it. The model will not be correct, but the ray will be
Brandel: Does the target ray space have a preferred...?
[Editor note: There is a lot of background noise that makes it difficult to hear all conversations. Please feel free to augment these notes]
<bajones> A preferred "roll" is what Brandel asked. (Rotation around the Z axis)
Rik: Discussed Brandel's question, but most of it was missed
Ada: Mentioned buffers (or buttons) and target ray space. Put it at the tip, but not???
Alcooper: Orientation of space must follow platform ergonomic specifications
<Zakim> bajones, you wanted to talk about standardization of stylus buttons
Ada: Asks Alcooper & Rik to work together
Brandon: Wants to see more than one example (Stylus) before standardizing them
Ada: agrees
<yonet> https://
Add language to account for empty transform in XRReferenceSpaceEvent
<yonet> https://
Rik: Brings up PR at above link
Brandon: Thinks text looks fine, wants a group review
Rik: PR (1392) Deals with discontinuity in headset tracking
… There should be a default pose. There are times when the pose doesn't exist or is incorrect. The PR adds the ability to omit the
… transform
Leonard & Rik: Transform includes rotation
Rik: Note that a "no transform" has the value of NULL
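[Scribe note: a minimal sketch of the situation the PR addresses - a `reset` listener that tolerates a null transform; `recenterContent`/`invalidateAnchors` are hypothetical app code.]
```ts
declare const refSpace: XRReferenceSpace;                    // assumed set up elsewhere
declare function recenterContent(t: XRRigidTransform): void; // hypothetical app code
declare function invalidateAnchors(): void;                  // hypothetical app code

refSpace.onreset = (event: XRReferenceSpaceEvent) => {
  if (event.transform) {
    // A transform was provided: re-anchor content relative to the new origin.
    recenterContent(event.transform);
  } else {
    // "No transform" (null): treat tracking as discontinuous and
    // re-establish poses from scratch instead of applying a correction.
    invalidateAnchors();
  }
};
```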
Ada: PR with current wording should be approved and merged
Normative compliance needed for calibration data
<ada> https://
Brandon: Led discussion of the above PR
… Feedback from ???. Much of the feedback has been incorporated. This PR deals with one piece that is more involved.
… The PR works to provide examples of actions that can be taken.
… Difficult to normatively require these items because devices work differently
… recommends future conversations on a device-by-device basis
<bajones> Feedback was from the PING (Privacy Interest Group)
Ada: Questioned Brandon on details to get a better understanding of requests from PING
Brandon: Provides an example of a headset that provides mm level accuracy compared to one (e.g. Cardboard) that doesn't have any head height tracking
… not all features can be applied to all devices. Much better would be to involve the device manufacturers to determine the accuracy & privacy needs
Brandon: Extensive description of why having normative text for all of the conditions won't work. Much better would
<Zakim> Brandel, you wanted to discuss "going incognito in the metaverse": https://
Brandon: be to illustrate many of the potential problem cases and potential solutions
Brandel: Fuzzy numbers could definitely cause UX problems, but there may be other means to go incognito - noise data
Alcooper: Suggests other wording that might be more acceptable to PING
Rik: Wonders what specific features in WebXR should be mitigated (according to PING)
Ada: Recommended action: Expand the wording without adding normative text
Leonard: Don't try to cover all privacy cases for all (or long) future time
depth-sensing module
<yonet> Add depthNear and depthFar to XRDepthInformation immersive-web/depth-sensing#43
<cabanier> Add XRView to XRDepthInformation immersive-web/depth-sensing#50
[it's quiet in the room. Hopefully not too quiet.]
cabanier: The WebXR depth-sensing spec has a rigid transform that is used for similar things, but it's inadequate for conveying the field of view and position that the depth texture was created at.
cabanier: that's why I added the projection matrix, because it contains more information. When we introduced depth sensing on Quest, the depth maps were tied to the time-warped positions on screen, generated at frame rate.
cabanier: that produced way too much overhead on Quest - as well as on Android - so the depth data is now delivered asynchronously, which means the information isn't frame-locked
henell: We now expose the depth map with its pose and camera model as we get it, and it's asynchronous and therefore not aligned with any other data
henell: that moves the complexity over to leverage that depth buffer to manage occlusion mapping. We are functionally using a shadowmap approach to determine occlusion against that depth map.
henell: It doesn't work if the framerate of the depth map is too low, but when it's close it works, and removes two-thirds of the compute in comparison to the every-frame approach.
henell: However, that means that we can't expose the depth maps in the same way. We need to add something more to make it work
henell: I'm not familiar with how WebXR exposes things like the color information for the pass-through, but some more information has to be made available.
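[Scribe note: for context, a minimal sketch of the current depth-sensing API being contrasted here - depth is queried per view inside the frame loop (CPU path shown), so it is implicitly aligned with the rendered views.]
```ts
declare const refSpace: XRReferenceSpace; // assumed set up elsewhere

function onXRFrame(_time: DOMHighResTimeStamp, frame: XRFrame) {
  const viewerPose = frame.getViewerPose(refSpace);
  for (const view of viewerPose?.views ?? []) {
    const depth = frame.getDepthInformation(view); // CPU depth path
    if (depth) {
      // Sample the depth (in meters) at the center of this view.
      const meters = depth.getDepthInMeters(0.5, 0.5);
    }
  }
  frame.session.requestAnimationFrame(onXRFrame);
}
```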
alcooper: expanding on cabanier's explanation: The initial intent of the depth sensing API was to provide those projections appropriate to the eyes of the viewer, but mostly just a coordinate-space transformation (y-flip etc)
<Zakim> bajones, you wanted to ask about the proposed matrix
bajones: clarification on the strategy: normally a shadow map would provide a projection matrix for the cone of vision and something akin to a view matrix (but not normally inverted)
bajones: You're normally wanting to invert those so you can get whether the light hits things - you can combine matrices together, that's what matrices do
… what do you propose the matrix actually encodes in this case? It sounds like both a projection matrix and a view matrix
… Not sure if you need to do the reverse-projection. Given that this exists in a native code-path it seems like you've got a solution!
henell: in OpenXR we don't multiply the matrices, we provide the pose and a pinhole camera model rather than constructing the matrix. It also means you can construct a point cloud from the data.
henell: Depending on what a user might be wanting to do, you multiply these values through the matrices to work out where they should be. Convert to the correct real world units - it's fairly straightforward
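[Scribe note: a rough sketch of the math henell describes - un-projecting a depth sample into world space given the camera's projection and pose. The matrix conventions (column-major, WebGL-style clip space, image v increasing downward) are assumptions, not anything stated in the room.]
```ts
function depthSampleToWorld(
  u: number, v: number,        // normalized [0, 1] depth-image coordinates
  depthMeters: number,         // linear depth along -Z in camera space
  invProjection: DOMMatrix,    // inverse of the depth camera's projection
  poseMatrix: DOMMatrix        // camera space -> world space (inverse view)
): DOMPoint {
  // Map image coordinates to NDC in [-1, 1].
  const ndcX = u * 2 - 1;
  const ndcY = 1 - v * 2;
  // Un-project a point on the near plane to get a camera-space ray.
  const p = invProjection.transformPoint(new DOMPoint(ndcX, ndcY, -1, 1));
  const rx = p.x / p.w, ry = p.y / p.w, rz = p.z / p.w;
  // Scale the ray so its camera-space depth (-z) matches the sample.
  const s = depthMeters / -rz;
  // Camera space -> world space via the pose matrix.
  return poseMatrix.transformPoint(new DOMPoint(rx * s, ry * s, rz * s, 1));
}
```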
bajones: Based on that description, I'd be uneasy reducing that down to a single matrix that has all that information in a fully usable form. I feel like we might want to provide a brand-new XRView that pairs with that depth information
bajones: That way it gives you more of the useful information to compose what an author might need - it's not quite the pinhole camera that you describe, but I don't think a one-size-fits-all matrix can do everything that everyone wants.
bajones: We do have things like recommended viewport size which may not be relevant, but maybe it has what we need. I see bialpio nodding in agreement.
cabanier: re: shader complexity - do we have examples we can provide to show how this information is used?
henell: Yes we have samples for Unity and elsewhere - ones for binary yes/no, and for fuzzier approaches that necessitate a pre-processing kernel step
bajones: I imagine this is similar to PCF shadows?
henell: No - we had something like that in the past, but we are preprocessing the depth map and applying a kernel with a simplified representation of that neighborhood (4x4) to do simpler lookups
bajones: Okay, that makes sense - if you have a way to link to any of these examples (or a way we can see them) that would be helpful
henell: iirc there is a downloadable SDK that has the information. People finding it would need to get that SDK, but I can link to that and point to where in the bundle the relevant data is
[that sounds acceptable to bialpio]
<henell> I was wrong, you can find the sample online here:
<henell> https://
bialpio: The original intent was to encode orientation, which phones did not initially encode appropriately. We agree that we need to expose more data and the question is how.
bialpio: The initial approach was to link that data in an XRView, but the problem is that we'd have to start synthesizing fake views. The spec doesn't currently say much about views that are not meant to be rendered to
<henell> The occlusion shader is here:
bialpio: I'd prefer to expose this extra information like that, but if it's not preferred, we may have to sever the link between this data and an XRView
bialpio: I'm not generally opposed to that, but resolving in that way would be a breaking change
bialpio: It may take some time to create a new version of an API leveraging this approach
<Zakim> alcooper, you wanted to mention breaking change
alcooper: We'd like to avoid breaking changes, as some people have shipped use of the current API. There _is_ a path forward to support the existing approach, albeit at some performance cost. Chrome will need to continue supporting the existing model.
alcooper: I do like the idea of supporting the different XRViews, but we don't currently have a way to indicate that these are not views for rendering to.
alcooper: We have discussed ways of indicating that in some V2 approach
[intermission about meatspace logistics]
<Zakim> ada, you wanted to ask about WebXR Samples repo
ada: It would be great to add a relevant sample to the samples repo. We might want to encourage users onto the new method - there wouldn't be a good reason for folks to continue using this in 10 years, for example
alcooper: I'm not opposed to deprecation, but there are metrics we'd need to use to permit actual removal. "Indeterminate" rather than "indefinite" - just that we'd be on the hook for it for some time.
cabanier: It will live for some time - three.js will never adopt this new view, for example.
cabanier: if people want the better-performing one, they will need to do harder work.
alcooper: This also depends on platform, since phones have the depth map exactly aligned with the view, whereas a headset is not providing that data from the same place.
ada: It would be good to make sure that the conceptual function of these APIs isn't significantly different on different platforms, since that tends to make people index too closely on a given platform or form factor.
<Zakim> bajones, you wanted to suggest that the view not be exposed through the typical getViewerPose() method
cabanier: It will work on all platforms, just not as well.
bajones: re: new XRViews - I don't think you'd expose the views _as_ getViewerPose-vended ones
bajones: It seems like you'd get depth from a specific "viewer view" - calling `getDepthInformation` with the _viewer_ view
bajones: But that does run into the backwards-compatibility issues that we've covered - another alternative would be "getTheDepthViewForTheViewerView"
bajones: (name TBD) - then, rather than going through this complicated transformation, if you're passing in the indirect depth view, it can just be delivered as the system provides it
bajones: We may need to come up with a better flow than that, but it might solve some of those problems because the view has all of the camera information and a signal of intent.
cabanier: Why wouldn't we just put the XRView in the XRDepth information object?
bajones: You absolutely could, but it would be easy to overlook data being vended in that way
bajones: It's a little harder to avoid if you have to get that in advance. That said, it's a clunky API. It might not be worth pushing the author to jump through those hoops
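[Scribe note: a purely illustrative sketch of the flow bajones describes; `getDepthView` stands in for the "name TBD" method and is not a real API.]
```ts
declare const refSpace: XRReferenceSpace; // assumed set up elsewhere

function onXRFrame(_time: DOMHighResTimeStamp, frame: XRFrame) {
  const viewerPose = frame.getViewerPose(refSpace);
  for (const view of viewerPose?.views ?? []) {
    // Hypothetical: fetch the depth-capture view paired with this viewer
    // view; it would carry its own transform and projectionMatrix, so the
    // system can hand the depth map over exactly as it was captured.
    const depthView = (frame as any).getDepthView(view); // name TBD per minutes
    const depth = frame.getDepthInformation(depthView);
  }
  frame.session.requestAnimationFrame(onXRFrame);
}
```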
[Coffee logistics intermezzo]
bialpio: It sounds like we're leaning toward exposing the projection matrix - it seems like we need to include the near and far, since they're implicit in such a matrix?
bialpio: As well as aligning the depth buffer with the viewer view - this was a feature that privacy review had input on
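[Scribe note: a small worked example of bialpio's point - for a standard finite perspective matrix (column-major, WebGL-style; an assumption, since WebXR also allows infinite-far matrices), near and far are recoverable from two elements, so exposing the matrix implies them.]
```ts
function nearFarFromProjection(m: Float32Array): { near: number; far: number } {
  const c = m[10]; // (far + near) / (near - far)
  const d = m[14]; // (2 * far * near) / (near - far)
  // Solving those two equations for near and far:
  return { near: d / (c - 1), far: d / (c + 1) };
}
```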
henell: The camera extrinsics have an identity implication?
alcooper: Something we believe mitigates this is making sure that the data being shipped is limited to things that the user themselves are aware of, so no visual data that's not presented to them is used.
Break due to power outage
<bajones> For anyone still in IRC, it sounds like the TPAC venue has suffered a power outage. Unsure when they'll be back online.
<bajones> There is no longer a zoom link on the W3C's page for this meeting.
<Yonet> Oh
<Yonet> It should be the same link.
<Yonet> Do you not have that.
<ada> It's the same as this morning
<Yonet> Anyone else is having an issue?
<bajones> I got it off the page earlier. Looking for it in my browser history
<Yonet> Smart
New module proposal: WebXR Body Tracking Module
<cabanier> https://
cabanier: Quest browser shipped body tracking earlier this year, just exposing what OpenXR did
… left/right parts have different rotations, causing confusion
… had internal talks which revealed OpenXR confusion as well, since then there's been some more clarification
billorr: This is based on a Meta OpenXR vendor extension. Joint orientation is mostly arbitrary and could be changed.
… Still trying to standardize on the OpenXR side
… Different conventions for orientations. Animation/rigging might have joint orientations that can just be copy/pasted to mirror the animation
… This is the convention that the Meta extension uses for the skeleton.
… The quaternion for the left elbow is the same as the right to mirror it.
… The alternative is that X goes along the joint or something similar, but this gives the opposite rotation when mirroring
… The Meta extension makes it easier for 3D modelers, but this is still arbitrary
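[Scribe note: an illustrative sketch of the two conventions billorr contrasts. Under the Meta-style convention a mirrored pose reuses the same local quaternion on the opposite joint; under an axis-aligned convention the quaternion must be reflected instead (here across the x = 0 plane).]
```ts
type Quat = { x: number; y: number; z: number; w: number };

// Meta-style convention: the mirrored joint takes the quaternion verbatim.
const mirrorByConvention = (q: Quat): Quat => ({ ...q });

// Axis-aligned convention: reflecting a rotation across x = 0 keeps w and x
// and negates y and z (rotations about X survive the mirror; rotations
// about Y or Z flip direction).
const mirrorAcrossYZ = (q: Quat): Quat => ({ x: q.x, y: -q.y, z: -q.z, w: q.w });
```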
cabanier: Confirms that the OpenXR extension is still a bit up in the air, but WebXR has the option to pick, since translating between conventions would be easy enough
… spec already implies this due to discussion about where the thumb is.
… any opposition to mirrored approach?
Leonard: Does this match OpenXR where there is overlap?
billorr: Matches current meta vendor extension, resolution on openxr still pending
Leonard: If OpenXR goes differently, what needs to happen?
cabanier: UAs need to do a little bit of extra work to mirror
… we'd potentially diverge from what OpenXR does and stick with our decision
bajones: This wouldn't be the first place that OpenXR/WebXR diverge (whether for historical reasons or due to requirements of the web)
… Does this effect how meshes are skinned?
… If OpenXR decides one way and WebXR another, does that mean I can't share a mesh between these two systems?
… [seems like yes]
… Bigger win if we can share assets
bajones: Would hate to see a split ecosystem where WebXR requires a conversion process
… Only way to avoid this would be to stop until OpenXR is done, which seems unfortunate.
… Is there a sense of timeline for this agreement?
Leonard: Reasonable question, no answer today, but may be able to answer by next meeting
Leonard: Unnecessary divergence would be good to avoid
bajones: If Khronos feels this will wrap up in a few months, delaying probably worth it.
… If the timeline is more uncertain or ~8 months+ out, feels a shame to force implementers to twiddle their thumbs
billorr: At least in normal pipeline for application, apps don't necessarily target a specific skeleton, so it inherently includes some retargeting in the pipeline
… This could be adapted for OpenXR vs. WebXR
… Standardization is probably too far out to wait on
… Not sure if other vendors with body tracking extensions are involved with W3C, but may want to check with them
bajones: If always retargeting the skeleton is a fact of life, this mitigates the issue
NeilT4: seems a shame to not align on this
Leonard: May just need more discussion amongst these groups
cabanier: Maybe ByteDance?
trevorPicoXR: Unsure
bajones: Unsure for Google's side as well
yonet3: Microsoft has this, but hasn't been involved with IWWG
… (recently)
<Zakim> ada, you wanted to state that they are for different purposes
ada: WebXR isn't the web version of OpenXR, so whatever we pick should be whatever works best for Web Community.
… Maybe don't overindex on what OpenXR would do, but rather investigate what other stores are doing so there's simpler integration with what's available within the community.
… eg Ready Player Me. Doesn't matter so much if this API isn't a close match to what's there for hands
… Primary use case is to pipe data over websockets so others can see what user is doing
… So should do whatever is best to enable those usecases
… while also keeping fuzzing, etc. from earlier in mind
NeilT4: +1 Shouldn't be WebXR following OpenXR, but rather what the Web does being the driver
… Maybe this is a case where WebXR helps drive OpenXR
… Is there a formal liaison?
Brandel: Not really
Leonard: Mostly handled in an individualized way
NeilT4: The Metaverse Standards Forum is another mechanism that could help with this
<Zakim> Brandel, you wanted to raise https://
Brandel: iOS has ARSkeleton3D, but I'm not sure of the what/why; presumably it has its own schema
… intend to investigate and report back about what/why decisions were made there as well
trevorPicoXR: Sounds similar to controllers being flipped. Can we just follow standards used for controller orientation?
cabanier: Controllers behave slightly differently, so they are just floating in space (same with hands)
trevorPicoXR: So shouldn't this just be the same for the skeleton?
cabanier: Developers assumed it would work the way that hands worked, but they work differently.
… Not sure if they applied to a skeleton or just used hand models
… Body model returns slightly nicer hands
… so skinned hands were broken
… body tracking gives emulated hands holding controllers if you pick them up, while hand tracking just drops the hands (at least at the time)
cabanier: Why is OpenXR taking a long time?
billorr: Trying to merge body and hand into one joint tracking API which complicated things
… not quite discussing joint orientations yet
… just API surface for getting data out
… mostly just a process issue, not yet any disagreements about orientation
… different vendors are very similar, but each vendor slightly prefers their own
Yonet: Sounds like some research to be done, should we revisit in 2 weeks?
Leonard: sent an email, will address when it comes back to us
Cabanier: Seems like no strong opinions, and maybe okay if rigging/retargeting is always needed anyway when downloading a model from the web
… Okay to wait a couple weeks, but not years
<yonet3> https://
Configure Background of model (transparency?)
Brandel: the idea with model is that in visionOS, a model is a stereoscopic entity that lives in 3D space
… it's tracked and resides in a portal
… it lives there but is not visible behind it
… you can change the background color of it and change the environment map
… there's a hope that it can be dynamically scaled so the object fits in the bounding box
… aka the volume
… most people use an orbit mode. In this mode, people can rotate and control the orientation
… at that point, the model should be set to the bounding sphere so it can't clip outside of it
… it should never show outside that boundary
… you can dynamically rescale browser windows in a way that is not visible to the JS context
… true scale is not passed on, so they would get bigger.
… another desirable thing is that the model can be extracted from the platform
… at which point it's lit by the environment
… it's tbd how we would expose this
… if the model has an animation, then it would be good that it plays
… there can be multiple tracks but for mvp, one animation track would suffice
… given that this changes the bounds, it's ok that this introduces clipping
… the box is calculated at the first frame and doesn't update dynamically
… one of the questions is how this is composited onto the page
… animation tracks is something we can do later
… image based lights are enough
… scene graph inspection is something we can add later
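[Scribe note: illustrative only - a sketch of wiring up the proposed element as described above. The element name follows the model proposal; the attribute shape is invented, and nothing here is finalized API.]
```ts
// Create the proposed <model> element; per the discussion it would be fit
// to its bounding volume, clipped to its bounds, and play a single
// animation track for the MVP.
const model = document.createElement('model');
model.setAttribute('src', 'teapot.usdz'); // attribute shape is illustrative
document.body.appendChild(model);
```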
bajones: I missed part of the presentation. great visuals btw
… you showed the teapot coming out
… is that a proposal or visualization?
Brandel: I think it's interesting but I'm ok postponing it
bajones: I suspect that this will be a lot more complicated
… this could hold up things
… that is the one part of your presentation that gave me pause
Brandel: this is where picture in picture could be used
… since they're doing the same thing there
… if we get arbitrary light and animated track, you'd be surprised what you can get away with
… we don't offer shear as a capability
Leonard: do you have a prototype that has it in the browser?
Brandel: this is just sample in javascript
bajones: to clarify, this demo is in three.js?
Brandel: yes
Leonard: so this is a visual demo of what you are talking about?
yonet3: do you get the lighting from the environment?
Brandel: I don't have an answer to that yet
… let's think about what we do with that
Leonard: do you have a time schedule to work on the specification?
Brandel: Lazlo and Rik offered to help
… now that we have more editors, I'm hoping to make more progress
… Samsung has expressed interest
… as has Meta
Leonard: does that mean it's ready at the end of the year?
… right now it's a CG. When will it be brought into the w3c?
Brandel: I updated the proposal to my state of thinking
… I have little experience with the standards process
… given the novelty, we should get a couple of implementations
Leonard: so you're talking around the first half of next year
Brandel: I'd like to have a lot defined
… if other people are working on it, we'll get more feedback
Leonard: so other people should wait until there's more feedback
cabanier: we're interested in implementing this and will start working on it next year
… it's TBD when we can ship it or how
… while prototyping we can provide feedback which will feed into the spec
ada: this group will incubate it, but it won't be specified here
… hopefully some people from this group can join WHATWG
… apple, samsung, meta and hopefully google as well will support it
MikeW: there's a lot of interest in HDR lately; have we considered how we render HDR content in this
… is this out of scope?
Brandel: it's important that lighting can be specified in hdr
… I do think that we will touch on hdr
… but it likely won't apply to the rendered content
ada: video and image can already be hdr
<yonet> https://
Brandel: (??) we'd like to consume HDR content but it's not clear if it needs to be called out for rendering
sql: when you make a model element detached, couldn't you make any element detached?
… so can we do this to other elements as well?
Brandel: there's a picture in picture element, and it has an approach for arbitrary positioning
NeilT4: is there anything you need from the format on what you're trying to achieve?
… in order to make your life easier
Brandel: we would like to use RealityKit
… we use PBR standards such as MaterialX
… for security
… I think there will be other materials
… USD has a very advanced system
… we should leave that complexity off the table
… people should use content that is web friendly
… the Alliance for OpenUSD is going to work on this
… as we get into more sophisticated use cases, we'd like to go deeper
… we should avoid scene graph inspection and manipulation
NeilT4: glTF uses MaterialX
… would the current implementation allow glTF?
Brandel: usually W3C standards don't specify/mandate a format
… OBJ is likely inadequate
… an actual 2024-quality format is a better fit for people to pursue
NeilT4: do we narrow down the implementation scope?
… does every implementation support every format?
Brandel: no.
NeilT4: is there a subset that everyone supports?
Brandel: I don't have an answer to that
bajones: W3C doesn't specify formats, that is true
… there are media formats
… but in many cases, it's not a desired state or by design
… you know very well that we should lock it down to a single format
… (that I think we should....)
… having people create content in multiple formats is not a good approach
Brandel: I understand that position
Brandel: if someone has a strong opinion, I can take that back for further discussion
bajones: are you saying that if we demand it, you will change your mind?
Brandel: yes, if someone has a strong demand, I can bring that back
bajones: demand is a strong word. any format that we rely on needs to be well specified to be considered
… whatever format is determined should be well supported by multiple implementations
Brandel: yes, that is a definite expectation
… yes, the Alliance for OpenUSD will work on that
<cwilso> +1
NeilT4: if you're trying to get portability, demanding usd and gltf support would get you 80% of all models
<cwilso> +1
[cwilso showed the scribe how to use the scribe tooling]
cwilso: the point of having web standards is interoperability; wants to underscore that if Chrome and Safari support different types, that is not interoperability
… the format we expect to support on the web needs to be carefully understood
… not dependent on a single implementation
… WebSQL had this problem, as SQL dialects are not translatable, so we want to avoid that issue
… for the image tag in practice, JPEG is functionally required but not PNG or others
… ideally we have a single path to having something working everywhere
neilt: might be premature to select any one format at the moment
<Zakim> Leonard, you wanted to talk about scene graph & IBL
Brandel: the expectation is that having requirements for the model tag can push requirements onto the file formats
bajones: speaking for glTF, its structure positions it well for model-tag-specific usage
… if we hear demands from users for other features we can collaborate with Khronos. A good start would be to pick a few features, make these a requirement, and ensure that files meeting these requirements can be supported.
… Would be interesting to know how USD could support this type of requirement constraint
Leonard: you mentioned several times having image-based lighting; is this part of the scene graph?
Brandel: you can squash the lighting into the file but it might need to be an additional asset that is specified separately and managed by DOM interactions or specified by CSS
… It is better if this is left out of the asset; this could be used for the whole scene or have separate files per asset depending on the demand
… if we support pulling something out into your current environment, your actual lighting should be used
… a lot of marketing sites want dark and sparkly to highlight certain objects
cabanier: did some initial investigation and is a bit worried about implementing USDZ rendering, as existing open source libraries might be hard to support directly in the browser. glTF support would save implementation time
… having different code paths for multiple formats might cause issues with animation playback
Brandel: the Alliance for OpenUSD is working on a specification, not a renderer
… there are no specific renderers for USDZ, but render delegates like three.js can be used to render USDZ
cabanier: at the end of the day authors might see USDZ and glTF in the same way
Brandel: agree, often goes back and forth between USDZ and glTF formats
cabanier: other people don't have RealityKit, making supporting USDZ complicated
Brandel: I understand the position of folks in the room
cabanier: again, supporting USDZ will add a lot to development time
Brandel: there are advantages of USDZ that people may want depending on their workflow
cabanier: at some point in the future we might want to create a scene graph out of the file formats; should we support only one format for this? Having different formats will present challenges
Brandel: other file formats might also be desired, like STL; reducing formats down to one might not be ideal.
bajones: related to what NeilT4 was saying, it's not a great idea to pick a single format that we will use forever and forbid other formats. JPEG was chosen initially but didn't have transparency, so PNG was added later
… would be nice to follow a similar pattern and pick one format as the baseline, and other browsers can support additional formats
… the most important thing for users is to have a common thing: if the only thing a developer cares about is hitting all users, they can just use that format and know it will work. It's not a bad thing to allow other formats outside of that, but a good idea to have a common ground.
… glTF is nice as the existing open source renderers are more easily available. The timeline for USDZ is less clear - we don't know when it will be better suited. If we could say USDZ for the web will be completed in a specific timeframe that would help, but we don't have a good sense for that.
… we should be focused on something that we know will fulfill the needs we have at the moment. If there are future needs we can move to another format.
alcooper: I want to highlight that having independent implementations is useful; USDZ doesn't have great support at the moment.
… USDZ is good in that it supports multi-author workflows, but that may not be critical for the model element in the near future, so it might not be ideal to make it part of the standard
Brandel: would be nice to have a hard statement from folks in the group to take back to Apple
cwilso: would be good to have a poll to get this hard statement
<bajones> +1
<alcooper> +1
NeilT4: how should we structure this poll?
cwilso: I suggest the poll should limit future file formats
Brandel: it shouldn't be a closed set; it should be web-shaped so that it doesn't limit new formats.
<yonet> poll: who would like to see glTF be the file format as the minimum supported set.
<cwilso> +1
+1
<alcooper> +1
<bajones> +1
<Leo> +1
<NeilT4> +1
<sql> +1
<yonet> +1
<bialpio> +1
<Brandel> 0
<ada> 0
<MikeW> 0
<chanada> 0
0
RESOLUTION: the working group agrees to see glTF as the minimum supported set.
Cont: discussion on model element
<yonet> https://
Does it make sense to have a 3D model with a transparent background?
Brandel: Whilst we cannot push HTML elements back, it doesn't make sense for elements to sit behind 3D models that they are in front of in the stereo perspective.
… Is it okay to mandate that a stereo element in a portal must have a fully opaque background?
Leon: when you say an opaque background you mean the piece you see behind the teapot.
… this is effectively a skybox
… it's always the first thing rendered so it's in the rear
Brandel: I don't know if it should be a hemisphere or cube, but what should we do with a texture background?
… Should a textured background be a static image rendered to the rectangle on the document?
Leonard: why can't it be placed infinitely far away?
Leonard: Oh, it's an HTML background, not a 3D background!
Brandel: Most elements have a certain amount of transparency is that something we can or cannot do here?
… We need to establish a good story for why a model cannot have a transparent background.
cabanier: you are asking if a CSS background should apply to a model?
… CSS background can do a lot.
Brandel: yes we have just been working with solid colours, but unless you are willing to accept an equirect environment we can't do textures/gradients
… Putting a texture behind + in front of a model is uncomfortable to look at
trevorPicoXR: why is the model behind the page rather than in front?
Brandel: so that the model cannot occlude trusted UI
Leon: It also is fatiguing to the user if every single element comes out of the page, like old 3D movies
bajones: [joins the queue] this is about if we should allow transparent backgrounds?
bajones: I agree that whilst it is technically feasible it would be very uncomfortable.
bajones: I feel like there is an interesting case when you are showing the model in 2D
<Zakim> ada, you wanted to mention 2d mode
cabanier: transparent colors could be used to see through the page and the world behind it
Brandel: I don't like that
bajones: that could let you produce Winamp-skin-style elements where you would have no visible edge to a page
Brandel: element full screen could potentially lead to this.
cabanier: Magic Leap had transparent web pages with a meta tag to allow it
Leon: could you use other models as the background to your scene?
Brandel: it's a valid thing to want to do but there is not a path forward.
<bajones> +1
<yonet> :)
MikeW: I think for a V1 keep it opaque and investigate it later
<Zakim> bajones, you wanted to talk about how CSS ruins everything. :)
ada: We could let users opt in to being 2d only get enhanced transparency capabilities.
bajones: I think it's going to be weird if CSS opacity is used but we can't opt out of that
… there is probably some sense to saying that backgrounds can't be transparent but the whole element can be made transparent by other means.
… perhaps if we can detect that the element is going to be overlayed.
<Zakim> ada, you wanted to address opacity
cabanier: some CSS filters introduce stacking contexts; maybe we can say if any of the parents introduce a stacking context then render it in 2D.
ada: that feels extremely flaky that it could break a model in a way that would be unexpected and frustrating to developers.
Brandel: I am aware of these problems but I don't know what is more important to manage.
cabanier: elements on top should be fine.
Brandel: unless they have backdrop-filter
trevorPicoXR: can vision os already do these things?
Brandel: these questions are mostly trying to resolve fundamental conflicts and establish an appropriate source of truth.
… e.g. a z-stacking world where elements can put themselves above or below real 3D objects makes things weird.
cabanier: we should have a meeting with CSS
<yonet> https://
Brandel: image-based lighting file discussion
Leon: collection of lights is insufficient
Leon: propose minimum 16 bits per channel images for lighting
bajones: not many ready made solutions in this space, used .ktx files for personal work
bajones: not really a good industry standard right now, agree with Leonard, lead with IBL
bajones: opt-in pulling in of environment lighting -> interesting idea; could lead to contrast issues in the page, maybe better in pulled-out model mode
bajones: IBL should be consistent across systems to ensure approximately the same across browsers, devices, and environments
Leon: generic, off-white IBL could be used by default
bajones: could also generate a generic image ourselves for use
Brandel: could be either in JS, in markup, or in CSS
<bajones> CSS is nice specifically because you could have every model on a page automatically inherit an IBL for consistency. Otherwise I share Brandel's unfamiliarity with what that involves.
ada: take to WHATWG and ask for advice; concerned about another format, there could be multiple IBL formats
ada: could be sourceset
@ada: (correction) IBL could be a source element, not sourceset
Leon: attribute is most visible, CSS could set all models the same way
bajones: we probably are not going to make the perfect choice, we should allow for evolution, establish a baseline, allow for change / experimentation as more formats come online
Brandel: prefer the simplest thing
ada: css imageset could be a good fit
https://
bajones: it would be good to do some basic research amongst common vendors in the space, see what is used to make an informed decision
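[Scribe note: a purely hypothetical sketch of the delivery routes discussed (markup, CSS, script); every name below is invented for illustration and none is proposed text.]
```ts
declare const model: HTMLElement; // the <model> element discussed above

// Markup route:   <model src="teapot.usdz" environmentmap="studio.ktx2">
model.setAttribute('environmentmap', 'studio.ktx2'); // attribute name invented

// CSS route (bajones's note: a stylesheet rule could give every model on the
// page the same IBL for consistency):
//   model { environment-map: image-set(url("studio.ktx2")); }
```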
Specify the view on model contents
<yonet> https://
Brandel: discussion on issue to specify a matrix on the model / scene
bajones: in general, great idea, maybe limit to XRRigidTransform instead, not sure about shears
Brandel: a way to reject invalid matrices which wouldn't work in a head-tracked environment is desirable
bajones: XRRigidTransform might be too rigid
ada: DOMMatrix is nice because it can be easily configured from CSS
<bajones> SceneTransform?
Brandel: may need to change name, open question, to provide context
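[Scribe note: a sketch of ada's point that DOMMatrix composes naturally with CSS transform syntax; the property receiving the matrix is hypothetical, and per the discussion a UA could reject non-rigid matrices (e.g. shears).]
```ts
// DOMMatrix can be built straight from a CSS transform list (window contexts).
const orientation = new DOMMatrix('rotateX(20deg) rotateY(-30deg)');

declare const model: HTMLElement;             // the <model> element
(model as any).entityTransform = orientation; // property name is illustrative
```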
Exposing computed boundingBox information
<yonet> https://
Brandel: discussion around bounding box for a model: center + extent
bajones: a glTF extension gives a bounding box around the entire scene
bajones: should be the bounding box of the bind pose maybe instead of the first frame of the animation? different discussion, but likely easiest to generate based off the bind pose
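[Scribe note: a hypothetical shape for the bounds discussed (center + extent); names are illustrative, not proposed IDL.]
```ts
interface ModelBounds {
  center: DOMPointReadOnly; // center of the bounding box, in model units
  extent: DOMPointReadOnly; // half-size of the box along each axis
}
```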
Managing animation(s)
<yonet> https://
Brandel: discussion on multiple animation tracks and durations
Leon: core glTF may have to play multiple animations simultaneously; maybe better to defer the decision initially
Leon: static models have zero duration, animated models have infinite duration, multiple animations with different durations and staggered starts have which duration?
Brandel: multiple animations should have playback rates and should be able to compute durations like a media track
Leon: better to wait on thinking about this
bajones / Leon: discussion on multiple animations, does not require simultaneous playback
trevorPicoXR / Brandel: extended explanation on animations and meta information
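[Scribe note: a sketch of the media-element-like control surface implied by the discussion; names follow HTMLMediaElement by analogy and are not settled.]
```ts
declare const model: any; // the <model> element; API surface hypothetical

model.playbackRate = 0.5; // per Brandel: animations should have playback rates
model.currentTime = 0;    // per Leon: a static model would report duration 0
model.play();             // an animated model behaves like an ongoing media track
```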
<yonet> https://