If you run a website that relies on ad revenue and you use DFP to maximize it, then you might have heard of Header Bidding.
Header bidding can improve revenue by collecting bids from multiple ad providers before passing the winners on to DFP, where AdSense and AdX have a chance to out-bid them. This increases competition in the ad auction while also avoiding the latency of chaining ad requests in a waterfall via passbacks. There’s even an open-source library, Prebid.js, to handle the process, and most ad providers have adapters for it, so it’s not too difficult to implement.
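The core flow Prebid.js manages can be sketched roughly like this (the placement IDs, account params and slot name below are placeholders, not real accounts):

```javascript
// Ad units declare the sizes and the bidder adapters to collect bids from.
var adUnits = [{
  code: 'div-banner-top',
  sizes: [[728, 90], [300, 250]],
  bids: [
    { bidder: 'appnexus', params: { placementId: '1234567' } },
    { bidder: 'rubicon', params: { accountId: '0000', siteId: '0000', zoneId: '0000' } }
  ]
}];

// In the browser: collect bids first, then pass the winners to DFP as
// key-values before the GPT ad request goes out.
if (typeof pbjs !== 'undefined' && typeof googletag !== 'undefined') {
  pbjs.que.push(function () {
    pbjs.addAdUnits(adUnits);
    pbjs.requestBids({
      timeout: 700, // don't let slow bidders delay the ad call indefinitely
      bidsBackHandler: function () {
        googletag.cmd.push(function () {
          pbjs.setTargetingForGPTAsync(); // winning bids -> DFP key-values
          googletag.pubads().refresh();   // now make the DFP request
        });
      }
    });
  });
}
```

The timeout is the important knob: it bounds how long the header auction can hold up the page's ad request.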
But oh dear, the Google Publisher Tag (GPT) script used by DFP is pretty big if you care about performance (and, sadly, seems to have some problems with WebComponent polyfills right now).
Fortunately, there’s a new GPT Light script which is much more lightweight as you’d expect from the name. Here’s how you can use it.
While we’re used to systems nowadays being distributed, running across multiple services on multiple platforms, when it comes to front-end web clients many people still have a rather “monolithic” outlook on things.
Many times this is down to technology imposing restrictions on us - it’s difficult enough to make some frameworks and all their component pieces work together to deliver an app without also throwing in the challenge of making multiple different frameworks coexist (part of the problem with the rise of “frameworks” instead of libraries).
It can be particularly problematic when an aging app needs to be upgraded. You may have an AngularJS app and be faced with the choice of whether to rewrite it as an Angular v2 / v4 app or switch to the Redux / React stack instead.
Both options represent a lot of work, and the difficulty of making frameworks coexist undermines any attempt to do things incrementally. This is where WebComponents can really help.
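The reason they help is that a custom element is framework-neutral: define it once and its tag works in an AngularJS template, a React render tree, or plain HTML alike. A minimal sketch (the element name and behaviour here are illustrative, not from any particular app):

```javascript
// Pure helper kept separate from the DOM so the logic is easy to test.
function badgeText(name) {
  return name ? name[0].toUpperCase() : '?';
}

// Guarded so the snippet is inert outside a browser.
if (typeof HTMLElement !== 'undefined' && typeof customElements !== 'undefined') {
  class UserBadge extends HTMLElement {
    connectedCallback() {
      // Render whenever the element lands in the document, wherever that is.
      this.textContent = badgeText(this.getAttribute('name') || '');
    }
  }
  customElements.define('user-badge', UserBadge);
}
```

Once registered, `<user-badge name="alice"></user-badge>` renders the same way no matter which framework produced the surrounding markup, which is what makes an incremental migration tractable.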
There are many challenges to understanding what our code is really doing at runtime:
We can’t always just attach a debugger and step through our code unless it’s running locally. But when it’s running locally, it’s not really representative of the live system. The only way to see what the live system is doing is to instrument it.
Fortunately, Google provides a fantastic tracing tool for its cloud platform (Stackdriver Trace) which can provide valuable insights into your application, even when calls span multiple services.
Here’s an example of using it to optimize code.
Polymer 2.0 is fantastic, and the upgrade path from v1.0 has been well planned and implemented, so it’s pretty smooth going. There are also some great docs that explain the upgrade process to follow.
But there are still a few “gotchas” that might catch you out along the way. I think some are down to the upgrade process being so good that it’s easy to forget you’re switching entirely to the new API as you move from one element to the next in the same project.
So, here are some things to remember when upgrading …
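One example of the kind of switch that catches people out is element registration itself: the v1 factory call versus the v2 class syntax. A sketch (the element is hypothetical; the v1 form is shown in comments since both can't register the same tag):

```javascript
// Polymer 1.x: a factory call both defines and registers the element.
// Polymer({
//   is: 'my-greeting',
//   properties: { name: { type: String, value: 'world' } },
// });

// Pure helper so the element's logic stays testable outside the browser.
function greet(name) {
  return `Hello, ${name}!`;
}

// Polymer 2.x: an ES class, with registration as an explicit second step.
if (typeof Polymer !== 'undefined' && typeof customElements !== 'undefined') {
  class MyGreeting extends Polymer.Element {
    static get is() { return 'my-greeting'; }
    static get properties() {
      return { name: { type: String, value: 'world' } };
    }
    _greet(name) { return greet(name); }
  }
  customElements.define(MyGreeting.is, MyGreeting);
}
```

It's exactly because a project mid-upgrade contains both styles side by side that it's easy to reach for the wrong API in the wrong file.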
One thing that seems to come up a lot in the Polymer Slack is how and where to store configuration settings for an app, how to access them, and, related to that, how to generate URLs within the app.
There are a few different ways to do this, and no single approach is going to be the best fit for every app (it depends on specific needs), but I’ll describe the approach I generally use, which has been working well for me.
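To make the problem concrete, here is one common shape such a solution takes: a single config object that owns the environment settings, plus a `urlFor()` helper so no element hard-codes paths. All the names and values below are illustrative, not from the post:

```javascript
// One place for environment-specific settings.
const config = {
  apiOrigin: 'https://api.example.com',
  basePath: '/app',
};

// Build app URLs from the config instead of scattering string literals.
function urlFor(route, params = {}) {
  const query = Object.keys(params)
    .map(k => `${encodeURIComponent(k)}=${encodeURIComponent(params[k])}`)
    .join('&');
  return `${config.basePath}/${route}${query ? '?' + query : ''}`;
}
```

The payoff is that changing the base path or API origin for a new deployment means editing one object rather than hunting through templates.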
One of the great things about Polymer and WebComponents is that they are part of the platform. What I mean by that is that once you define an element, you can add some HTML containing a reference to it however and wherever you like, and the browser will render it.
Contrast that with components in most JS frameworks: try setting a string of markup via innerHTML and, even though it may contain elements defined in the app, they won’t appear.
To show how useful it is, imagine we want to implement a markdown editor with Ghost-like image uploading …
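A sketch of why the innerHTML behaviour matters for this (the `<image-upload>` element name is hypothetical): the markdown preview can rewrite image syntax into a custom element tag and simply assign the result, and the browser upgrades the elements in place.

```javascript
// Swap markdown image syntax for a (hypothetical) upload-aware element.
function withUploaders(markdown) {
  return markdown.replace(
    /!\[([^\]]*)\]\(([^)]+)\)/g,
    (_m, alt, src) => `<image-upload alt="${alt}" src="${src}"></image-upload>`
  );
}

// In the browser the preview is then just:
// preview.innerHTML = withUploaders(editor.value);
```

No framework-specific render pass is needed; setting innerHTML is enough for the custom elements to come to life.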
So, for various reasons, I decided to write my own blog engine, and because I think AppEngine is such a fantastic platform (especially for hosting multi-tenanted apps), that’s what the initial runtime implementation uses.
I now have a few blogs set up which demonstrate the multi-tenancy working, so I thought it was a good point to do some back-of-the-envelope math to figure out how much it would cost to host, or how many blogs I could host for any given price.
Remember - one of my motivations for doing this is that I’m too cheap to pay for someone else to host my blog and I also want to host blogs for friends and family without ending up on the endless treadmill of maintaining servers, database backups and WordPress plugins.
So how’s it looking so far?
Just a quick snippet for integrating Google Analytics with Polymer Single Page Apps.
As well as tracking ‘page’ views any time the route changes, it also uses Google Analytics’ Exception Tracking feature for cheap-as-chips error reporting - handy for catching any browser compatibility issues you might have missed.
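The two halves can be sketched like this (the `location-changed` event is what Polymer's app-location fires on route changes; the path helper and wiring are assumptions about the snippet, not a copy of it):

```javascript
// Pure helper: the page path GA should record for a given location.
function pagePath(location) {
  return location.pathname + location.search;
}

if (typeof window !== 'undefined' && typeof ga !== 'undefined') {
  // Track a pageview whenever the SPA route changes.
  window.addEventListener('location-changed', () => {
    ga('set', 'page', pagePath(window.location));
    ga('send', 'pageview');
  });

  // Cheap error reporting via GA's exception hit type.
  window.addEventListener('error', e => {
    ga('send', 'exception', { exDescription: e.message, exFatal: false });
  });
}
```

Exception hits show up in GA under Behavior reports, segmentable by browser, which is exactly what you want for spotting compatibility problems.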
I saw this data from a Polymer product manager buried in a GitHub issue comment and thought it was worth highlighting to show just how amazing Polymer 2.0 is when it comes to framework size. Many JS frameworks have made grand promises about the small size of their apps, but it’s usually only after an inordinate amount of work and effort that you even get close to the promised values, if at all (my Angular 2 app was never anything but huge).
Polymer is very different to many of the JS-first frameworks in that it builds on the web platform instead of trying to reproduce and replace so much of it - because browsers are actually very good at parsing HTML and rendering it quickly, if only you let them get on with it (yeah, who knew!).
Here are the individual and combined sizes of the different Polymer versions, together with the polyfills required in each browser …
One of the really great things about Polymer 2, beyond the framework itself, is the well-thought-out tooling, with polymer-cli helping you with project templates, local dev serving, testing, linting, building and packaging.
The only thing it doesn’t do for me right out of the box is create the information needed to add HTTP/2 server push. It will actually generate link prefetch tags in the index.html page, which does speed things up, but what I really wanted was to push the dependencies for a view with the very first request on first load.
It turned out not to be too difficult at all …
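To give a feel for the shape of the solution (this is a sketch of the general technique, not necessarily the exact approach in the post): a push-manifest-style map of route to resources can be turned into the `Link` header that tells an HTTP/2 front end what to push, or a browser what to preload.

```javascript
// A push-manifest-style map: route -> resources to deliver with it.
// Paths here are illustrative.
const pushManifest = {
  '/': {
    '/src/my-app.html': { type: 'document' },
    '/src/my-view1.html': { type: 'document' },
  },
};

// Build the Link header value for a given route.
function linkHeaderFor(manifest, route) {
  const resources = manifest[route] || {};
  return Object.keys(resources)
    .map(path => `<${path}>; rel=preload; as=${resources[path].type}`)
    .join(', ');
}
```

A server (or AppEngine handler) then sets that header on the response for the route, and an HTTP/2-aware front end pushes the listed resources alongside the HTML.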