Modern browsers are capable of some amazing things: access to hardware features such as the user’s geolocation, device vibration, and even battery status is already available via easy-to-access APIs. It doesn't end there, either. Browser vendors are currently developing APIs that will give web developers even greater access to device hardware: Near Field Communication (NFC), ambient light sensors, proximity sensors, accelerometers, and even shape detection are all being targeted by some of the amazing APIs in development.
If you're looking to build a powerful PWA that takes advantage of the hardware on a device, things are only going to get better. Native apps have had access to these features for many years, so it's great to see this kind of thing coming to the web.
In this code lab, we are building on top of the project started in the Web Push Notifications code lab.
If you didn't do it already, fork and then clone the following repository: https://github.com/The-Guide/fe-guild-2019-pwa.git
$ git clone https://github.com/[YOUR GITHUB PROFILE]/fe-guild-2019-pwa.git
$ cd fe-guild-2019-pwa
If you want to start directly with Beyond PWAs, check out the following branch:
$ git checkout pwa-beyond-init
First, install the dependencies:
$ npm install
Then type in the terminal:
$ npm start
and open Chrome at localhost:8080/fe-guild-2019-pwa/
In this code lab we are also using the server, so in case you didn't do it already, fork and then clone the following repository: https://github.com/The-Guide/fe-guild-2019-pwa-server.git
$ git clone https://github.com/[YOUR GITHUB PROFILE]/fe-guild-2019-pwa-server.git
$ cd fe-guild-2019-pwa-server
Install the dependencies:
$ npm install
To start the project type in the terminal:
$ npm start
The server will be hosted at localhost:3000.
The Media Capture API allows authorized Web applications to access the streams from the device's audio and video capturing interfaces, i.e. to use the data available from the camera and the microphone. The streams exposed by the API can be bound directly to the HTML <audio> or <video> elements, or read and manipulated in code, including more specific processing via the Image Capture API, the Media Recorder API, or Real-Time Communication.
There is also a higher-level alternative built into mobile operating systems like iOS and Android that doesn't require any JavaScript API: the basic HTML <input type="file" accept="image/*"> element allows launching any application that provides an image file, including the camera.
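To give a flavor of the lower-level processing mentioned above, here is a minimal sketch of grabbing a still photo straight from the camera with the experimental Image Capture API (browser support varies; the #photo element is hypothetical):

// Minimal sketch, assuming a browser that implements the Image Capture API
navigator.mediaDevices.getUserMedia({video: true})
  .then(stream => {
    // Wrap the camera's video track and ask it for a full-resolution photo
    const track = stream.getVideoTracks()[0];
    return new ImageCapture(track).takePhoto();
  })
  .then(blob => {
    // Display the captured photo in a hypothetical <img id="photo"> element
    document.querySelector('#photo').src = URL.createObjectURL(blob);
  })
  .catch(error => console.log(error));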
navigator.mediaDevices.getUserMedia(constraints)
Prompts the user for access to the media interface specified by the constraints and returns a Promise that is resolved with the interface's stream handler.
stream.getAudioTracks()
Returns a collection of audio track objects provided by the device's microphone.
stream.getVideoTracks()
Returns a collection of video track objects provided by the device's camera.
mediaElement.srcObject = stream
Sets a stream to be rendered into the provided <audio> or <video> DOM element.
The previous version of the standard, supported with vendor prefixes, contained the callback-based getUserMedia method directly on the navigator object:
navigator.webkitGetUserMedia(constraints, successCallback, errorCallback)
The Media Capture API is what we need to capture selfies instead of uploading a picture.
index.html
Just above the div#pick-image
<video id="player" autoplay></video>
<canvas id="canvas" width="320px" height="240px"></canvas>
<button
id="capture-btn"
class="mdl-button mdl-js-button mdl-button--raised mdl-button--colored">
Capture
</button>
feed.css
Just after #create-post
#create-post video, #create-post canvas {
  width: 512px;
  max-width: 100%;
  display: none;
  margin: auto;
}

#create-post #pick-image, #create-post #location-loader {
  display: none;
}

#create-post #capture-btn {
  margin: 10px auto;
}
feed.js
Just after the declaration of imagePicker
const imagePickerArea = document.querySelector('#pick-image');
const videoPlayer = document.querySelector('#player');
const canvasElement = document.querySelector('#canvas');
const captureButton = document.querySelector('#capture-btn');
Just after the last variable declaration
const initializeMedia = () => {
  if (!('mediaDevices' in navigator)) {
    navigator.mediaDevices = {};
  }

  // Polyfill getUserMedia for browsers that only ship the prefixed, callback-based version
  if (!('getUserMedia' in navigator.mediaDevices)) {
    navigator.mediaDevices.getUserMedia = (constraints) => {
      const getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

      if (!getUserMedia) {
        return Promise.reject(new Error('getUserMedia is not implemented!'));
      }

      return new Promise((resolve, reject) => getUserMedia.call(navigator, constraints, resolve, reject));
    };
  }

  // Prefer the front ("selfie") camera; fall back to the image picker on failure
  navigator.mediaDevices.getUserMedia({video: {facingMode: 'user'}, audio: false})
    .then(stream => {
      videoPlayer.srcObject = stream;
      videoPlayer.style.display = 'block';
      videoPlayer.setAttribute('autoplay', '');
      videoPlayer.setAttribute('muted', '');
      videoPlayer.setAttribute('playsinline', '');
    })
    .catch(error => {
      console.log(error);
      imagePickerArea.style.display = 'block';
    });
};
Adapt openCreatePostModal by replacing the first line with:
setTimeout(() => createPostArea.style.transform = 'translateY(0)', 1);
initializeMedia();
Replace closeCreatePostModal with:
const closeCreatePostModal = () => {
  imagePickerArea.style.display = 'none';
  videoPlayer.style.display = 'none';
  canvasElement.style.display = 'none';
  captureButton.style.display = 'inline';

  // Stop the camera stream so the hardware is released when the modal closes
  if (videoPlayer.srcObject) {
    videoPlayer.srcObject.getVideoTracks().forEach(track => track.stop());
  }

  setTimeout(() => createPostArea.style.transform = 'translateY(100vh)', 1);
};
Add a click event handler for captureButton:
captureButton.addEventListener('click', event => {
  canvasElement.style.display = 'block';
  videoPlayer.style.display = 'none';
  captureButton.style.display = 'none';

  const context = canvasElement.getContext('2d');
  // Draw the current video frame, scaling the height to preserve the aspect ratio
  context.drawImage(
    videoPlayer, 0, 0, canvasElement.width, videoPlayer.videoHeight / (videoPlayer.videoWidth / canvasElement.width)
  );
  videoPlayer.srcObject.getVideoTracks().forEach(track => track.stop());
  picture = dataURItoBlob(canvasElement.toDataURL());
});
utility.js
const dataURItoBlob = dataURI => {
  // Decode the base64 payload and read the MIME type from the data URI header
  const byteString = atob(dataURI.split(',')[1]);
  const mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];

  // Copy the decoded bytes into a typed array and wrap them in a Blob
  const ab = new ArrayBuffer(byteString.length);
  const ia = new Uint8Array(ab);
  for (let i = 0; i < byteString.length; i++) {
    ia[i] = byteString.charCodeAt(i);
  }
  return new Blob([ab], {type: mimeString});
};
Don't forget to run npm run build before npm start so the service worker takes into account the latest changes to the files.
Testing on mobile requires https, so you need to deploy the app to GitHub Pages (have a look at the Introduction to Service Workers step 4, Add to Home Screen, if you don't have it set up) and run npm run deploy.
The Geolocation API lets authorized Web applications access the location data provided by the device, obtained using either GPS or the network environment. Apart from one-off location queries, it also provides a way for the app to be notified about location changes.
navigator.geolocation.getCurrentPosition(callback)
Runs a one-off query for the location, with coordinates, accuracy, altitude & speed, if available.
navigator.geolocation.watchPosition(callback)
Sets up observing for location changes, invoking the callback for every change.
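The code lab below only needs getCurrentPosition, but for completeness here is a minimal sketch of observing position changes with watchPosition (the options are illustrative):

// Minimal sketch: get notified about every location change
const watchId = navigator.geolocation.watchPosition(
  position => console.log('Moved to', position.coords.latitude, position.coords.longitude),
  error => console.log(error),
  {enableHighAccuracy: true, timeout: 7000}
);

// Later, when updates are no longer needed:
navigator.geolocation.clearWatch(watchId);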
Let's use the Geolocation API to get our position when we take selfies.
index.html
Just below the div#manual-location
<div class="input-section">
<button
id="location-btn"
type="button"
class="mdl-button mdl-js-button mdl-button mdl-button--colored">
Get Location
</button>
<div
id="location-loader"
class="mdl-spinner mdl-js-spinner is-active">
</div>
</div>
feed.css
.mdl-spinner {
  margin: auto;
}
feed.js
After the last variable declaration
const locationButton = document.querySelector('#location-btn');
const locationLoader = document.querySelector('#location-loader');
let fetchedLocation = {lat: 0, lng: 0};
Just before initializeMedia
const initializeLocation = () => {
  if (!('geolocation' in navigator)) {
    locationButton.style.display = 'none';
  }
};
Adapt openCreatePostModal by adding the call to initializeLocation(); after the call to initializeMedia();
Adapt closeCreatePostModal by adding the following before the if block:
locationButton.style.display = 'inline';
locationLoader.style.display = 'none';
Add a click event handler for locationButton:
locationButton.addEventListener('click', event => {
  if (!('geolocation' in navigator)) {
    return;
  }

  let sawAlert = false;

  locationButton.style.display = 'none';
  locationLoader.style.display = 'block';

  navigator.geolocation.getCurrentPosition(position => {
    locationButton.style.display = 'inline';
    locationLoader.style.display = 'none';
    fetchedLocation = {lat: position.coords.latitude, lng: position.coords.longitude};

    // Reverse-geocode the coordinates into a human-readable address
    const reverseGeocodeService = 'https://nominatim.openstreetmap.org/reverse';
    fetch(`${reverseGeocodeService}?format=jsonv2&lat=${fetchedLocation.lat}&lon=${fetchedLocation.lng}`)
      .then(response => response.json())
      .then(data => {
        locationInput.value = `${data.address.country}, ${data.address.state}`;
        document.querySelector('#manual-location').classList.add('is-focused');
      })
      .catch(error => {
        console.log(error);
        locationButton.style.display = 'inline';
        locationLoader.style.display = 'none';
        if (!sawAlert) {
          alert('Couldn\'t fetch location, please enter manually!');
          sawAlert = true;
        }
        fetchedLocation = {lat: 0, lng: 0};
      });
  }, error => {
    console.log(error);
    locationButton.style.display = 'inline';
    locationLoader.style.display = 'none';
    if (!sawAlert) {
      alert('Couldn\'t fetch location, please enter manually!');
      sawAlert = true;
    }
    fetchedLocation = {lat: 0, lng: 0};
  }, {timeout: 7000});
});
Don't forget to run npm run build before npm start so the service worker takes into account the latest changes to the files.
Testing on mobile requires https, so you need to deploy the app to GitHub Pages (have a look at the Introduction to Service Workers step 4, Add to Home Screen, if you don't have it set up) and run npm run deploy.
The Web Streams API lets you stream content to your users. For example, say you want to display an image on a web page. Without streaming, the browser has to request the image, wait for the entire response to download, decode the image data, and only then render it to the screen.
All these steps are critical to displaying an image, but why should you wait for the entire image to be downloaded before you can start them? With streaming, you can process the data piece by piece as it is downloaded and render something onto the screen much sooner. The great thing about this is that you can process the result in parallel with fetching, which is much better.
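This section has no listing of its own, so here is a minimal sketch of consuming a fetch response piece by piece with a ReadableStream reader (the URL is a placeholder):

// Minimal sketch: process a response body chunk by chunk as it downloads
fetch('/large-image.jpg') // placeholder URL
  .then(response => {
    const reader = response.body.getReader();
    let received = 0;

    // Keep reading chunks until the stream is exhausted
    const pump = () => reader.read().then(({done, value}) => {
      if (done) {
        console.log(`Done, received ${received} bytes in total`);
        return;
      }
      received += value.length; // value is a Uint8Array chunk
      console.log(`Received a chunk of ${value.length} bytes`);
      return pump();
    });

    return pump();
  })
  .catch(error => console.log(error));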
The Web Bluetooth API is a low-level API allowing Web applications to access the services exposed by nearby Bluetooth-enabled devices.
navigator.bluetooth.requestDevice(serviceFilters)
Scans for the device in range supporting the requested services. Returns a Promise.
device.gatt.connect()
Returns a Promise resolved with the server object providing access to the services available on the device.
server.getPrimaryService(name)
Returns a Promise resolved with the particular Bluetooth service on the device.
service.getCharacteristic(name)
Returns a Promise resolved with the GATT characteristic object.
characteristic.readValue()
Returns a Promise resolved with a raw value from the GATT characteristic.
<p>
  <button class="btn btn-lg btn-default" onclick="readBatteryLevel()">Read Bluetooth device's<br>battery level</button>
</p>
<p id="target"></p>
const readBatteryLevel = () => {
  const $target = document.getElementById('target');

  if (!('bluetooth' in navigator)) {
    $target.innerText = 'Bluetooth API not supported.';
    return;
  }

  // Ask the user to pick a nearby device exposing the standard battery service
  navigator.bluetooth.requestDevice({
    filters: [{
      services: ['battery_service']
    }]
  })
    .then(device => device.gatt.connect())
    .then(server => server.getPrimaryService('battery_service'))
    .then(service => service.getCharacteristic('battery_level'))
    .then(characteristic => characteristic.readValue())
    .then(value => {
      // The battery level is a single unsigned byte (0-100)
      $target.innerHTML = 'Battery percentage is ' + value.getUint8(0) + '.';
    })
    .catch(error => {
      $target.innerText = error;
    });
};
There have been several attempts to establish a universal, multi-platform, asynchronous way of exchanging data from Web applications to native apps or other Web apps, but to date no standardized solution has been conceived.
There are, however, some basic workarounds for sending data to other applications. Native applications can register handlers to receive data from Web apps using special URL prefixes (although differences exist between iOS and Android). There are also third-party non-standard services that coordinate sharing data between Web applications.
Google Chrome 18 implemented the experimental Web Intents API. It was conceptually based on the Android Intents system. Apps interested in receiving data were required to be registered in the Chrome Web Store and declare the intent support in the manifest file. Apps sending the data were able to invoke an Intent of a particular type and let the system handle the selection of the target application and its proper invocation. The API was removed in Chrome 24 because of various interoperability and usability issues. No other vendor implemented Web Intents.
The newest implementation, the Web Share API (available in Chrome on Android as of September 2017), is much simpler: it consists of a method that invokes the platform-specific share mechanism and is limited to sharing named URLs only. There is a complementary Web Share Target API, in an early design phase, to allow registering Web applications as share receivers.
navigator.share({title, text, url})
Invokes the system-defined application selection and data share dialog to send the named URL to another application and returns a Promise resolved when the share was successful.
<p>
  <button
    class="btn btn-lg btn-default"
    onclick="share()">
    Share PWA Selfies<br>with <b>Web Share</b>
  </button>
</p>
const share = () => {
  if (!('share' in navigator)) {
    alert('Web Share API not supported.');
    return;
  }

  navigator.share({
    title: 'Progressive Selfies',
    text: 'Grab your duck face the PWA way',
    url: 'https://pwa.selfies/'
  })
    .then(() => console.log('Successful share'))
    .catch(error => console.log('Error sharing:', error));
};
The Payment Request API allows Web applications to delegate the payment checkout process to the operating system, letting it use whatever methods and payment providers are natively available for the platform and configured for the user. This approach takes away the burden of handling complex checkout flows at the application side, reduces the scope of the payment provider integration and ensures better familiarity for the user.
With the supportedMethods parameter, the API allows the Web application to select the supported payment methods - for example, only to allow credit card payments or payments processed by a specific 3rd-party provider - as well as configure their parameters. Methods are specified by a predefined identifier or by a 3rd-party URL. Note that the behaviors of the payment methods might vary. For example, the basic-card predefined provider does not process any actual payments - its role is reduced to collecting the credit card details and returning them to the requesting Web application - although 3rd-party providers might proceed with the actual money transfer as part of the flow.
With the details parameter, the Web application should specify the total amount and currency of the payment. It also allows setting up the order summary information, including the subtotals, order items, and shipping options.
With the options parameter, the Web application might specify what kind of customer data it requires to be able to fulfill the request. It may require a shipping address (requestShipping), email (requestPayerEmail), phone (requestPayerPhone) or name (requestPayerName).
The only payment method available on Apple devices is Apple Pay, and it is only functional on devices with fingerprint authentication (Touch ID). It is accessible via the proprietary non-standard ApplePaySession API instead of the Payment Request API described here. Support for the standard Payment Request API is available from Safari 11.1 on macOS and Safari on iOS 11.3.
paymentRequest = new PaymentRequest(supportedMethods, details, options)
Creates a payment request object with the requested amounts, currencies and methods configured.
paymentRequest.canMakePayment()
Returns a Promise resolved with a value indicating whether it is possible to conduct a payment using any of the supportedMethods specified.
paymentRequest.show()
Presents the checkout confirmation UI to the user or redirects to the system-defined application that accepts payments by the selected method. Returns a Promise resolved with the response object when the payment provider successfully confirms the payment. Note that it may or may not already denote the money being transferred - it depends on the selected payment method implementation.
request.addEventListener('shippingaddresschange', listener)
An event fired when the user changes the shipping address data, allowing updating the request's details using the event.updateWith() method.
request.addEventListener('shippingoptionchange', listener)
An event fired when the user changes the shipping options (delivery vs. pickup etc.), allowing updating the request's details using the event.updateWith() method.
event.updateWith(promise)
Waits for a promise to resolve with the new payment details and puts them into the request's details.
response.toJSON()
A convenience method that serializes the payment response (including the requested payment details and the data returned by the provider) into JSON intended to be sent to server-side for order processing.
response.complete(result)
Signals the browser that the app-specific steps of payment processing (like sending the order data to the server-side) have completed. Returns a Promise resolved when the Payment Request UI is cleared.
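The donation sample below doesn't use shipping, so purely as an illustration, reacting to a shipping address change might look roughly like this (the amount is a placeholder):

request.addEventListener('shippingaddresschange', event => {
  // Recompute totals and shipping options for the new request.shippingAddress here
  event.updateWith(Promise.resolve({
    total: {label: 'Total', amount: {currency: 'EUR', value: '10.00'}} // placeholder
  }));
});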
<p>
  <button class="btn btn-default" onclick="donate()">
    Donate 10€
  </button>
</p>
<p id="log"></p>
const donate = () => {
  if (!window.PaymentRequest) {
    alert('This browser does not support Web Payments API');
    return;
  }

  let request = initPaymentRequest();
  onBuyClicked(request);
};
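The snippet above calls initPaymentRequest(), which isn't shown in this section. A minimal sketch, assuming a basic-card payment for the 10€ donation (the supported networks are illustrative):

/**
 * Minimal sketch of initPaymentRequest(): configures a basic-card
 * payment request for a 10€ donation.
 */
const initPaymentRequest = () => {
  const supportedMethods = [{
    supportedMethods: 'basic-card',
    data: {supportedNetworks: ['visa', 'mastercard']} // illustrative networks
  }];
  const details = {
    total: {label: 'Donation', amount: {currency: 'EUR', value: '10.00'}}
  };
  return new PaymentRequest(supportedMethods, details);
};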
/**
* Invokes PaymentRequest for credit cards.
*/
const onBuyClicked = request => {
  request.show()
    .then(instrumentResponse => sendPaymentToServer(instrumentResponse))
    .catch(err => document.getElementById('log').innerText = err);
};
/**
* Simulates processing the payment data on the server.
*/
const sendPaymentToServer = instrumentResponse => {
  // There's no server-side component in these samples. No transactions are
  // processed and no money changes hands. Instantaneous transactions are not
  // realistic. Add a 2 second delay to make it seem more real.
  window.setTimeout(() => {
    instrumentResponse.complete('success')
      .then(() => document.getElementById('log').innerHTML = resultToTable(instrumentResponse))
      .catch(err => document.getElementById('log').innerText = err);
  }, 2000);
};
/**
* Converts the payment response into an HTML table.
*/
const resultToTable = result => {
  return `<table class="table table-striped">
    <tr><td>Method name</td><td>${result.methodName}</td></tr>
    <tr><td>Billing address</td><td>${(result.details.billingAddress || {}).addressLine}, ${(result.details.billingAddress || {}).city}</td></tr>
    <tr><td>Card number</td><td>${result.details.cardNumber}</td></tr>
    <tr><td>Security code</td><td>${result.details.cardSecurityCode}</td></tr>
    <tr><td>Cardholder name</td><td>${result.details.cardholderName}</td></tr>
    <tr><td>Expiry date</td><td>${result.details.expiryMonth}/${result.details.expiryYear}</td></tr>
  </table>`;
};
The Shape Detection API gives developers access to features such as face detection, barcode detection, and even text detection. This is great for the web.
To understand how you might use it in the real world, consider the following example. Imagine you own a large shop that sells books. If you ever need to check the price of a book without a price tag, you can walk to the register and scan the label to check the price. But if you had a PWA on your mobile device with access to the prices of all the books, you could walk around the store using your mobile device and the barcode detector to quickly and easily give you the price of the book. This is just one example, but being able to detect shapes opens up a world of possibilities.
The following listing shows a basic barcode detection example.
const barcodeDetector = new BarcodeDetector();

barcodeDetector.detect(image)
  .then(barcodes => {
    barcodes.forEach(barcode => console.log(barcode.rawValue))
  })
  .catch(err => {
    console.log('Looks like something went wrong:', err);
  });
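Face and text detection mentioned above follow the same pattern. As a hedged sketch, detecting faces with the experimental FaceDetector (support varies across browsers) might look like this:

const faceDetector = new FaceDetector();

faceDetector.detect(image)
  .then(faces => {
    // Each detected face exposes a boundingBox locating it within the image
    faces.forEach(face => console.log(face.boundingBox));
  })
  .catch(err => console.log('Face detection failed:', err));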
Here is a checklist which breaks down the things we learned in this code lab:

The Web Bluetooth API allows websites to communicate over GATT with nearby user-selected Bluetooth devices in a secure and privacy-preserving way.
The Web Share API allows websites to invoke the native sharing capabilities of the host platform directly from the web.
The Payment Request API is a system that aims to eliminate checkout forms by vastly improving the user workflow during the purchase process and providing a more consistent user experience, enabling web merchants to easily implement payment methods.
Modern browsers are capable of some amazing things: access to hardware features such as the user’s geolocation, device vibration, and even battery status are already available via easy-to-access APIs.
You can use the Shape Detection API to detect barcodes, text, and even faces inside images.
To get the final version of the code, check out the following branch:
$ git checkout pwa-beyond-final