We’ve all seen and used them; we may even have written some ourselves. They seem like such a great idea at the time, and using them seems to make life easier. I’m talking about framework-specific components.
I want to talk about why WebComponents are much better, and why they should be the first thing we look to use and create whenever possible.
Whether you put ads on a blog to earn a full-time income as a blogger or use them to help offset the hosting costs of a community forum, it makes sense to maximize whatever revenue you can from the space you give up to ads.
Many people’s first (and only) experience of generating passive website income with ads is via AdSense, and it’s a great system: you add some code snippets to your pages, ads are shown to visitors and, if you have enough traffic to generate views and clicks on them, Google pays you some money each month. If you work hard at SEO to promote your site, it can cover the hosting costs or even generate a decent income.
So what if you could generate additional income on top of what AdSense pays and, at the same time, increase what AdSense pays each month? It almost sounds too good to be true, but it isn’t. Here’s how …
If you’re still paying $$$ for SSL certificates, it may be time to look at Let’s Encrypt, which describes itself as “a free, automated and open certificate authority”.
SSL certificates are now effectively free.
Sounds too good to be true? Unless you need a fancy green-bar EV certificate, there’s really no need to be paying for SSL certificates anymore, especially now that there is a Go package to support automatic certificate generation.
It turned out to be easier to set up the auto-certificate system than it was to renew a paid-for SSL certificate. Here’s how …
Hang out in any front-end web development chatroom for even a short period and you’ll come across the same set of “help me” or “how do I do it?” questions over and over again. They are not difficult issues once you have learnt about them, but until you do they are often a source of confusion and frustration while learning front-end development.
So I thought I’d try and cover a few of the common issues that seem to come up repeatedly …
If you run a website that relies on ad revenue and you use DFP to maximize that revenue, then you might have heard of Header Bidding.
This can improve revenue by collecting bids from multiple ad providers before passing the winning bids on to DFP, where AdSense and AdX have a chance to out-bid them. This increases competition in the ad auction while avoiding the latency of chaining ad requests in a waterfall via passbacks. There’s even an open-source library, Prebid.js, to handle the process, and most ad providers have adapters for it, so it’s not too difficult to implement.
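The auction idea itself is simple: gather bids from several providers, pick the highest, and pass it to the ad server as something it can target. As a toy sketch (this is deliberately not the Prebid.js API; the function and bidder names here are made up for illustration):

```javascript
// Toy sketch of the header-bidding auction: keep only providers that
// actually responded with a price, then let the highest CPM win the
// right to compete against AdSense/AdX inside DFP.
function pickWinningBid(bids) {
  const valid = bids.filter((b) => b && b.cpm > 0);
  if (valid.length === 0) return null; // nothing to send to the ad server
  return valid.reduce((best, b) => (b.cpm > best.cpm ? b : best));
}

const bids = [
  { bidder: "providerA", cpm: 1.2 },
  { bidder: "providerB", cpm: 2.05 },
  { bidder: "providerC", cpm: 0 }, // no fill
];

const winner = pickWinningBid(bids);
// winner is providerB at 2.05; in a real setup its price bucket would
// be set as key/value targeting on the DFP ad slot before the request.
```

In practice Prebid.js handles the adapter calls, timeouts and price-bucket targeting for you; the sketch just shows the winner-selection step the teaser describes.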
But oh dear, the Google Publisher Tag (GPT) script used by DFP is pretty big if you care about performance (and sadly, it seems to have some problems with WebComponent polyfills right now).
Fortunately, there’s a new GPT Light script which is much more lightweight as you’d expect from the name. Here’s how you can use it.
While we’re used to systems nowadays being distributed and running across multiple services on multiple platforms, when it comes to front-end web clients many people still have a rather “monolithic” outlook.
Often this is down to technology imposing restrictions on us: it’s difficult enough to make one framework and all its component pieces work together to deliver an app, without also taking on the challenge of making multiple different frameworks coexist (part of the problem with the rise of “frameworks” instead of libraries).
It can be particularly problematic when an aging app needs to be upgraded. You may have an AngularJS app and be faced with the choice of rewriting it as an Angular v2 / v4 app or switching to the React / Redux stack instead.
Both represent a lot of work, and the difficulty of making frameworks coexist can frustrate any attempt to do things incrementally. This is where WebComponents can really help.
There are many challenges to understanding what our code is really doing at runtime.
We can’t always attach a debugger and step through our code unless it’s running locally, but when it’s running locally it’s not really representative of the live system. The only way to see what the live system is doing is to instrument it.
Fortunately, Google provide a fantastic tracing tool for their cloud platform which can provide valuable insights into your application even when calls span multiple services.
Here’s an example of using it to optimize code.
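The core of what a tracing tool records is not complicated. Before reaching for Stackdriver Trace itself, here is a minimal sketch of the underlying idea (this is not the Cloud Trace API, just an illustration): wrap each named unit of work in a timed "span" so slow paths show up in the collected data.

```javascript
// Minimal illustration of instrumenting code with timed spans:
// run the work, record how long it took under a name, return its result.
function traceSpan(name, fn, spans) {
  const start = Date.now();
  try {
    return fn();
  } finally {
    spans.push({ name, ms: Date.now() - start });
  }
}

const spans = [];
const total = traceSpan("sum", () => {
  let n = 0;
  for (let i = 1; i <= 1000; i++) n += i;
  return n;
}, spans);
// total === 500500, and spans now holds one timing entry named "sum".
```

A real tracing service does the same thing at scale, nesting spans and stitching them together across service boundaries so a single request can be followed end to end.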
Polymer 2.0 is fantastic, and the upgrade path from v1.0 has been well planned and implemented, so it’s pretty smooth going, with some great docs explaining the process to follow.
But there are still a few “gotchas” that might catch you out along the way, and I think some are down to the upgrade process being so good: it’s easy to forget that, between two elements in the same project, you may have switched entirely to the new API.
So, here are some things to remember when upgrading …
One thing that seems to come up a lot in the Polymer Slack is how and where to store configuration settings for an app, how to access them and, related to that, how to generate URLs within the app.
There are a few different ways to do this, and no single approach is going to be the best fit for everyone (it depends on specific needs), but I’ll describe the approach I generally use, which has been working well for me.
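One common pattern (not necessarily the exact approach the full post describes) is to keep settings in a single config module and build all app URLs through one helper, so no component hard-codes a path. The keys shown here (`apiBase`, `defaultPageSize`) are made-up examples:

```javascript
// Single source of truth for app settings; everything imports from here.
const config = {
  apiBase: "https://api.example.com/v1",
  defaultPageSize: 20,
};

// Build a URL from the configured base plus a path and query parameters,
// so changing apiBase in one place updates every URL in the app.
function appUrl(path, params = {}) {
  const query = Object.entries(params)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join("&");
  return config.apiBase + path + (query ? `?${query}` : "");
}

const url = appUrl("/posts", { page: 2, size: config.defaultPageSize });
// url === "https://api.example.com/v1/posts?page=2&size=20"
```

The benefit shows up at deploy time: switching between staging and production APIs, or changing a path scheme, becomes a one-line config change rather than a hunt through components.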
One of the great things about Polymer and WebComponents is that they are part of the platform. What I mean by that is that once you define an element, you can add some HTML containing a reference to it however and wherever you like, and the browser will render it.
Try that with a typical framework component: set a string of markup using innerHTML and, even though it may contain elements defined in the app, they won’t appear.
To show how useful it is, imagine we want to implement a markdown editor with Ghost-like image uploading …