Develop a Camera Web App on Foldables
Objective
Learn how to develop a camera web application that detects a partially folded posture and adjusts its layout accordingly to improve the user experience on Samsung foldable devices.
In order to create a camera web app for foldable devices, you will use the following:
- HTML5 and CSS modern features
- WebRTC
- Device Posture API
- Viewport Segments API
Overview
Foldable devices are here! These smartphones allow different form factors with just one device, which can lead to new opportunities for innovation. Because developers should be able to detect the current posture of the device, that is, the physical position in which the device is being held, Samsung is working within the W3C on a new web standard called the Device Posture API. This API allows web applications to request and be notified of changes in the device posture. Together with the Device Posture API, the Viewport Segments API enables web developers to build layouts that are optimized for the dual-screen form factor of foldables. Web apps can take advantage of these new form factors and improve the user experience.
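For example, a web app can check the current posture through the device-posture media feature, the same one used later in this Code Lab. Here is a minimal JavaScript sketch (the log messages are only for illustration):

// A minimal sketch: reacting to posture changes from JavaScript.
const postureQuery = window.matchMedia("(device-posture: folded)");

function onPostureChange(e) {
  if (e.matches) {
    console.log("The device is partially folded (Flex mode).");
  } else {
    console.log("The device is flat or fully opened.");
  }
}

onPostureChange(postureQuery);
postureQuery.addEventListener("change", onPostureChange);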
Set up your environment
You will need the following:
- Samsung Internet (at least v19.0)
- Samsung Galaxy Foldable device:
  - Galaxy Z Fold2, Z Fold3, or Z Fold4
  - Galaxy Z Flip, Z Flip3, or Z Flip4
- If a physical device is not available for testing, use one of the following options:

  A. Remote Test Lab

  Requirements:
  - Samsung account
  - Java Runtime Environment (JRE) 7 or later with Java Web Start
  - Internet environment where port 2600 is available

  B. Device Posture API Polyfill

  For testing on a non-foldable device, just add the polyfill to the project.
Start your project
- Open a new browser tab, go to https://glitch.com, and log in.
- Click New Project, then select Import from GitHub.
- Enter the URL of this repository.
Add HTML video markup
In index.html, you will have two main sections: one is dedicated to the video camera and the other one to the camera controls. You can identify the video section as it contains the class video-container. Complete the HTML markup by adding the <video> element with its width, height, and id attributes.
<div class="video-container">
<video id="video" width="1280" height="200" >
Video stream not available.
</video>
</div>
Once the video markup is added, the HTML file is complete. Besides this, you can find:
- A <canvas> element into which the captured frames are stored. It is kept hidden, since users don't need to see it.
- An <img> element where the pictures will be displayed.
Enable front and back cameras with facingMode
In index.js, you'll find a startUp() function. Here is where you:
- Initialize most values.
- Grab element references to use later.
- Set up the camera and image.
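As an illustration, grabbing the element references could look roughly like the sketch below. The element ids other than video are assumptions for this example; check index.html for the ones the project actually uses.

// Hypothetical element lookups; the real index.js may use different ids.
const video = document.getElementById("video");
const canvas = document.getElementById("canvas");
const photo = document.getElementById("photo");
const flip_button = document.getElementById("flip-button");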
Start by setting up the camera. You will have access to both the front and back cameras by using facingMode, a DOMString that indicates which direction the camera faces when capturing images. It's always good practice to check whether this constraint is available for use. Add the following lines of code in startUp():
let supports = navigator.mediaDevices.getSupportedConstraints();
if (supports["facingMode"] === true) {
  flip_button.disabled = false;
}
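For reference, the constraint can then feed into the options object passed to getUserMedia(). The sketch below is only an illustration: the names shouldFaceUser and defaultsOpts match the ones used in the next step, but the exact shape in index.js may differ.

// shouldFaceUser toggles between the front ("user") and back ("environment") cameras.
let shouldFaceUser = true;

let defaultsOpts = {
  audio: false,
  video: {
    facingMode: shouldFaceUser ? "user" : "environment"
  }
};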
Link the media stream to the video element
If facingMode is available, the camera web app will use this option in the capture function to determine which camera is currently in use and later pass it to the flip button. Now the camera should be activated. Insert this block of code after retrieving the defaultsOpts.video value using facingMode:
navigator.mediaDevices
.getUserMedia(defaultsOpts)
.then(_stream => {
stream = _stream;
video.srcObject = stream;
video.play();
})
.catch(error => console.error(error));
In order to get the media stream, you call navigator.mediaDevices.getUserMedia() and request a video stream, which returns a promise. The success callback receives a stream object as input and sets it as the <video> element's source. Once the stream is linked to the <video> element, start playback by calling video.play(). It's always good practice to include the error callback too, in case the camera is not available or the permissions are denied.
Take a look at the implementations in index.js
At this point, the functionality of the web app is complete. Before moving to the next step, let's review the rest of the JavaScript code:
- There is an event listener on the video element for the canplay event, which checks when video playback begins. If it's the first time running, it sets video attributes like width and height.
- For the snip button, there is an event listener for click that captures the picture.
- The flip button waits for a click event to set the flag that indicates which camera is being used, front or back, in the variable shouldFaceUser, and then initializes the camera again.
- clearPicture() creates an image and converts it to a format that is displayed in the <img> element.
- Finally, takePicture() captures the currently displayed video frame, converts it into a PNG file, and displays it in the captured frame box (a rough sketch of this function and the flip handler follows this list).
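The sketch below is only an illustration: it reuses the variables from startUp() and assumes a hypothetical capture() helper that requests the stream again; the actual code in index.js may differ.

// Sketch: switch cameras by stopping the current stream, toggling the flag,
// and starting the camera again with the new facingMode.
flip_button.addEventListener("click", () => {
  if (stream) {
    stream.getTracks().forEach(track => track.stop());
  }
  shouldFaceUser = !shouldFaceUser;
  capture(); // hypothetical helper that calls getUserMedia() again
});

// Sketch: copy the current video frame to the hidden canvas, convert it to a
// PNG data URL, and show it in the <img> element.
function takePicture() {
  const context = canvas.getContext("2d");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  context.drawImage(video, 0, 0, canvas.width, canvas.height);
  photo.setAttribute("src", canvas.toDataURL("image/png"));
}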
Use Device Posture and Viewport Segments APIs
At this point, you should have a working prototype of a camera web app. The video camera should be displayed, and a picture can be taken using the snip button. In addition, a preview of the picture taken is shown in the small preview display. The flip button also allows you to switch between the front and back cameras.
Now, it's time to play around with the layout of the web app and take advantage of the features available on a foldable device. To do that, you will implement the Device Posture API, which allows developers to detect the current posture of the phone. To change the layout when the device is partially folded, the posture you will look for is the folded one, in which the device takes the form of a book or a laptop.
The Viewport Segments API, on the other hand, is an experimental media feature designed to detect whether your website is being displayed on a dual-screen device. The media feature is called viewport-segments and comes in two forms: horizontal-viewport-segments and vertical-viewport-segments. You can use these media queries combined with the Device Posture API to also check whether the device's hinge is currently in a horizontal or vertical orientation, which can be really helpful for Samsung devices.
The Galaxy Z Fold devices have a vertical hinge orientation, while the Galaxy Z Flip devices have a horizontal hinge. With the Viewport Segments API, you can use horizontal-viewport-segments for Galaxy Z Fold devices and vertical-viewport-segments for Galaxy Z Flip devices, as illustrated in the diagram below.
Apply the following media query in style.css:
@media (vertical-viewport-segments: 2) and (device-posture: folded) {
  body {
    display: flex;
    flex-flow: column nowrap;
  }
  .camera-controls {
    top: env(viewport-segment-top 0 1);
    height: env(viewport-segment-height 0 1);
  }
  .msg {
    display: block;
    margin: 3em;
  }
}
Using modern CSS features like display: flex, you can change the layout of the body element and of the elements with the classes video-container and camera-controls when the device is in Flex mode.
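If you also want to react to the same condition from JavaScript, for example to tweak the camera controls, the media features used in the CSS above can be observed with matchMedia. The sketch below is not part of the tutorial's code, and the flex-mode class name is only an example:

// Matches Flex mode on a device with a horizontal hinge, such as the Galaxy Z Flip series.
const flexModeQuery = window.matchMedia(
  "(vertical-viewport-segments: 2) and (device-posture: folded)"
);

flexModeQuery.addEventListener("change", e => {
  // "flex-mode" is a hypothetical class name used only for this illustration.
  document.body.classList.toggle("flex-mode", e.matches);
});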
Test your app
Whether you test on a real foldable phone or on a remote test lab device, you need to enable the Device Posture API in the latest version of Samsung Internet. To do this, open the browser and type internet://flags in the URL bar and search for either Device Posture API or Screen Fold API, then select Enable.
- Test on a real device

  If you have a real physical device, you can test the app directly in Samsung Internet using the URL that Glitch provides when you click the Show in a new window menu. Just partially fold your phone, and you will see how the layout changes and even discover a hidden message!
- Use Remote Test Lab

  The other option, if you don't have a physical device, is to use Remote Test Lab. You can choose any Galaxy foldable device from the list and follow the same instructions as you would with a real device. Just make sure to enable the Device Posture API and have the latest version of Samsung Internet. Use the buttons provided by Remote Test Lab to partially fold your remote Galaxy device.
- Implement the polyfill

  The polyfill allows you to emulate folding behavior on devices that do not have folding capabilities. It helps you visualize how the content responds to different angle and posture configurations. Just include sfold-polyfill.js directly in your code and use the polyfill settings (screenfold-settings.js), a web component that emulates the angle of the device and, therefore, changes its posture. Moreover, add the following code in index.html:

  <head>
    …
    <script type='module' defer src="screenfold-settings.js"></script>
    <script src="sfold-polyfill.js"></script>
    …
  </head>
  <body>
    <section>
      <screenfold-settings></screenfold-settings>
    </section>
    …
  </body>
As the current polyfill implements a previous version of the API, just replace the media query depending on your setup:
- When testing on a personal computer, either a laptop or a desktop, use @media (screen-fold-posture: laptop)
- When testing on a regular phone that is not a foldable, use @media (screen-fold-posture: book)
You're done!
Congratulations! You have successfully achieved the goal of this Code Lab. Now, you can create a camera web app that changes its layout when a device is partially folded. If you're having trouble, you may check the complete code here.
Learn more with these resources: