How To Save Blink Camera Recordings

**Update** June 2020

In May 2020, the Blink Team sent an email indicating that the scripts leveraged in this blog post/GitHub are using an outdated method for authentication. I have confirmed that the as-implemented method is no longer functional. If time permits or demand is high enough, I will work on incorporating functional changes.

The e-mail communication sent from the Blink team, giving fair warning.


For years, I wrestled with the idea of getting security cameras for the exterior of my residence, torn between “I want all control and no monthly costs,” “I want some of the next-gen features and iterative software improvements,” and “I care about my privacy too much,” among other considerations. If you are like me, you sought a camera system that didn’t require power cables, was wireless, and didn’t have a mandatory cloud storage subscription attached to it. Last year, I realized that a cost-effective, low-touch solution with the relevant features left few options to consider. After a decent deal on Prime Day, I decided to get a three-camera Blink XT2 system. Amazon acquired Blink in late 2017 for approximately $90 million, effectively pairing a cost-effective camera solution with a cloud and retail/distribution powerhouse. I speculate that some of the feature/function gaps are the result of a maturing acquisition and a relatively new product offering.

Feature/Function + Caveats

The Good: The Blink ecosystem provides a cost-effective solution for video surveillance. We get between two and three months out of a set of Energizer Lithium batteries for each camera, and that is not with the default settings, as some cameras persist more than ten seconds of video for a given event. Additionally, I have bumped up the clarity on the cameras since the battery life is already acceptable to us. Lastly, the settings and tuning options are relevant and not too complex to understand.
The Bad: There are a few minor annoyances, like the cameras being busy when motion is detected or while video is presumably being saved. The more prominent annoyance is the lack of a web interface, or any other means, to save multiple videos at once. Blink provides 7200 seconds of recording time rather than a hard storage limit, like 5 GB. This isn’t fantastic by any means and makes it difficult for high-churn setups to provide latent value; going back 20 days and looking for a specific recording might not be realistic. In my usage scenario, 7200 seconds is approximately 1 GB of data, averaging just over 136 KB per second of recorded video; that is not an extreme amount of data from a storage perspective. In my case, with three cameras, I get slightly more than one week’s worth of history.
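To put rough numbers on that quota, here is a quick back-of-the-envelope check. The 7200-second limit and ~136 KB/s rate come from above; the per-day recording volume is an assumed figure chosen to match roughly a week of retention:

```python
# Back-of-the-envelope storage math for the Blink cloud quota.
RETENTION_SECONDS = 7200   # Blink's recording-time quota (from the post)
KB_PER_SECOND = 136        # observed average bitrate (from the post)

total_gb = RETENTION_SECONDS * KB_PER_SECOND / 1024 / 1024
print(f"Total quota is roughly {total_gb:.2f} GB")  # ~0.93 GB, i.e. ~1 GB

# Assumed figure: three cameras together capturing ~900 seconds of
# motion events per day, which lines up with "slightly more than 1 week."
assumed_seconds_per_day = 900
days_of_history = RETENTION_SECONDS / assumed_seconds_per_day
print(f"Retention at that rate: {days_of_history:.0f} days")
```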

The Solution (For Technical Folks)

Some months back, I looked for solutions that could persist the recordings on my local storage for later use. I found that someone had built the protocol scaffolding and done the dirty work of documenting an undocumented API. I forked the protocol code, added comments and a few other tweaks, and then published it on GitHub.

The code can be found here:

For my setup, I performed the following steps:

  1. Ensure you have Python installed; if not, Python can be easily installed from here:
  2. Clone the repo from GitHub, or just download the single .py file.
  3. Once saved, modify the .py file to contain your username and password. I loosely recall an issue with certain characters in the password field, so if you are getting strange errors and your password has special characters, consider changing the password to remove them.
  4. At this point, you can run the file ad-hoc, or you can create a scheduled task/cron job so it runs on a nightly basis. I have a scheduled task set up on a Windows system that runs every 24 hours.
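For step 3, the edited section of the script might look something like the sketch below. The variable names are hypothetical (the actual script’s fields may differ), and the character check simply encodes the caveat about special characters:

```python
# Hypothetical configuration block -- the actual variable names in the
# script may differ; edit whatever credential fields the file exposes.
USERNAME = "you@example.com"
PASSWORD = "your-password-here"

# Characters of this kind have been known to cause odd auth errors;
# the exact set is a guess, not something confirmed against the API.
SUSPECT_CHARACTERS = set("\"'\\%&")

def password_looks_safe(password: str) -> bool:
    """Return True if the password avoids characters that may trip up auth."""
    return not (set(password) & SUSPECT_CHARACTERS)

if not password_looks_safe(PASSWORD):
    print("Warning: password contains characters that may cause auth errors")
```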

Manual Run

Below is an example of running it ad-hoc. The recordings (.mp4) will be stored in whichever directory you run the .py file from; in my case, S:\Temp. Python is already in the PATH and is executable from any directory.

A single run of the Python script from a Windows Command Prompt

The Scheduled Task

Adding a scheduled task is easy and is a consistent way to retrieve the files without intervention. While I won’t go into detail, I will post some screenshots of how I have the scheduled task set up.
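For reference, an equivalent task can also be registered from an elevated Command Prompt via schtasks. This is a sketch only; the script path, task name, and run time are all assumptions to substitute with your own:

```python
import subprocess

# Assumed values -- substitute your own script location, task name, and time.
script_path = r"S:\Temp\blink_download.py"

cmd = [
    "schtasks", "/Create",
    "/SC", "DAILY",          # run once every day
    "/ST", "02:00",          # at 2 AM, when new recordings are unlikely
    "/TN", "BlinkArchive",   # hypothetical task name
    "/TR", f'python "{script_path}"',
]

print(" ".join(cmd))
# On a Windows system, uncomment to actually register the task:
# subprocess.run(cmd, check=True)
```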

Manual Solution (Non-Technical, Awful)

If all of this is too complicated, an arduous fallback is using the Blink mobile application to save the videos. As of this writing, you can do this one video at a time, and it will add each video to your camera roll, at least on iOS devices.

The way to save videos inside of the Blink App on iOS

Looking Forward

If there is decent traction on this post from others who want to do this with little to no technical knowledge, I may consider porting this over to a single executable that can be run without any expertise in the field. That said, if it becomes too prolific and causes Blink/Amazon angst, they will easily find ways to play cat and mouse and prevent this from happening at scale among their user base.

In the long term, hopefully Blink/Amazon will build out the ability to archive recorded assets historically.


Learning Resources for Aspiring and Veteran Sales Engineers


I’ll go out on a limb and guess that talented Sales Engineers are difficult to find. Then again, most good talent is hard to find. As I work to bring myself up to speed on the depth of the Sales Engineer role, and the problems they solve and the value they provide, I realize that the role is absolutely vital to a customer-centric sales organization. After years of being on the buying side of Enterprise Technology and observing what makes an excellent SE, I want to formalize some of my observations into concrete learning. I am dedicating time to learn from the years of experience others so graciously produce for all to consume. Below, I list a few resources for both aspiring and seasoned Sales Engineers, as learning isn’t below anyone, regardless of experience.

The Material

We The Sales Engineers by Ramzi Marjaba – A Podcast + Blog

First up is a series of podcasts and blog posts organized and produced by Ramzi Marjaba. Ramzi has been churning out podcasts since April of 2018, intending to help Sales Engineers either obtain a career or grow in the SE field. His podcasts are probably the quickest and most frictionless way to absorb a broad spectrum of experience from industry veterans. He also has a ton of resources on his website that span the work of others in the field.

You can visit We The Sales Engineers here:

Mastering Technical Sales by John Care

Second is a book written by John Care called Mastering Technical Sales: The Sales Engineer’s Handbook. I have read this book and found that it covers nearly everything that could be a contributing factor in becoming a good Sales Engineer. There are tons of practical ideas from someone who brings decades of functional expertise to the pages. Some of you may find topics that are elementary or that you have been doing for years, but don’t let those sections steer you away from reading the remainder of the book.

A pure, and very rare, non-affiliate link:

Sales Engineering University [SEU] by Pat Trainor

Last up is a YouTube Channel graciously produced by a gentleman named Pat Trainor. When it comes to visually learning both sides of the SE house, it doesn’t get much better than what Pat has put out there. Pat’s presentations provide immense amounts of clarity into the SE world. He slowly builds on topics and is cognizant of not taking shortcuts just for the sake of brevity.

The YouTube Channel can be found here:


If you are a new Sales Engineer or looking to become one, the resources above are top notch. Before going into an interview or embarking on your next quarter, absorb as much of this as you can. Look for critical similarities in the material and apply what is relevant to your specific field.


Memory Consumption When Web Dependencies Not Available


Grammarly is a pretty good tool to have for work, blogging, or writing general emails. I have found it to be that extra set of eyes that catches the obvious mistakes in addition to those subtle nuances in writing that are easily missed. We started using Grammarly a few years ago, stopped due to pricing, and then started back up again about one month ago. They have two service tiers, Free and Premium. The Premium subscription is close to $12/month when paid annually, so by no means is this a cheap service when compared to the value one might extract from something like Netflix for a similar dollar amount.


One of the many functionalities offered is an extension for popular browsers, which takes the text from text fields on web pages and applies the same linguistic rigor as it would to your email, if you desire it to do so. To fully leverage this functionality, you are directed to log in on their website, and the extension will then recognize you as a Premium user.

The Problem

So now that we are trying to log in, it should be fairly simple, right? Well, if one of the dependencies on the website is not found, it begins to throw an enormous number of exceptions, ultimately causing Chrome/Firefox/Brave to slow to a crawl and consume insane amounts of memory. At one point, I let it keep going, and Chrome got to nearly 10 GB of RAM consumed as a result of this issue.

After doing some digging, I noticed that a single exception seemed to repeat itself: the URL is inaccessible due to ERR_NAME_NOT_RESOLVED. This isn’t an uncommon occurrence and happens on nearly every website I visit; it is a consequence of employing selective DNS resolution using something like Pi-Hole. While I am not an expert on front-end web development, it seems a safe guess that an unavailable dependency shouldn’t cause the entire browser session to slow to a crawl and potentially crash a system.

Google Chrome Incognito Screenshot – Over 100,000 exceptions in under 60 seconds – Impressive!

The Solution

The solution is fairly simple – Whitelist the given domain name in Pi-Hole. After doing this, I was able to perform the necessary actions on their website. If, in your case, you can’t/don’t have this level of knowledge or control, try another browser. I noticed that Microsoft Edge (non-Chromium) seemed to be fine when navigating their site.

Various entries appear showing the domain is being blocked by the DNS resolver

Support Engaged

While the solution is easy for me, since I have control over name resolution, it may not be so easy for others. Additionally, what if that endpoint happens to be down for an extended period? Everyone who attempted to visit the site in a Chrome- or Firefox-based browser would be immediately turned off. So I decided to let support know about the problem in hopes that they would be able to A.) understand it and B.) eventually resolve it. I imagine it’s a smaller team at Grammarly, and defects like this shouldn’t take too long to remediate. I opened a ticket and provided some details, in addition to a screen recording of the tens of thousands of exceptions being thrown. The response was not great, so I attempted to articulate the problem in another way. That also yielded nothing, hence this blog post. It isn’t as if I am asking them to fix elements that aren’t displayed correctly because I am using six layers of advertisement blocking. No, this is causing the system to nearly crash, an effective denial of service.

Support Correspondence

Below is my first note to support upon opening the ticket:

My first response to support when opening the ticket!

And their response:

Grammarly Support’s first response

And my inevitable followup:

My followup and second response.

And their final response in addition to closing the ticket!

Sure it is! | When in doubt, close it out!
Grammarly Support’s second and final response


This is petty and likely didn’t warrant all of this effort. That said, I attempted to Google the problem twice over two weeks to no avail. There was also some chatter on Reddit about Grammarly crashing someone’s browser, along with some tin-foil-hat responses, so I figured this post might quell some of those fears and hopefully appear in a Google search; I can’t be the only one on planet Earth experiencing this. Best-case scenario, someone from Grammarly who can own the problem and bring it to resolution sees this post! All of that said, Grammarly is an awesome product with tons of potential ahead of it. The ability to positively impact someone’s writing is nearly limitless, while having writing samples from a large population of users enables some next-level data insights, further elevating people’s writing skills.

Intercept and Redirect DNS Requests

Updated: 12/28/2019


DNS – an underlying primitive that is simple yet powerful, and essential to a highly functional, frictionless internet. Many of us are aware of how authoritative DNS is; whoever controls DNS can control nearly the entire experience.

Shortly after adding Pi-Hole, a DNS-based ad-blocking system, to my home network, I noticed that specific devices/functions didn’t seem to be using the expected DNS settings. After looking at the Pi-Hole logs and a few captures of mirrored traffic across the network, I realized that, in some cases, both the obtained [DHCP] and configured [Static] name server settings were being ignored by offending devices. I believe this behavior stems from a few potential motives:

  • An attempt to have absolute “control” over record resolution
  • A modest attempt to avoid downtime as a result of a “down” DNS endpoint
  • An attempt at prevention of “selective resolution” by upstream DNS [Pi-Hole’s primary capability]
  • Sheer ignorance and/or poor development practices

Notice the theme in the above points – they are all attempts, and rudimentary ones at that. Nearly all home network users will never notice when a device ignores the intended DNS configuration. But for those who are cognizant of their privacy and time in front of devices, these attempts are easy to overcome, for now at least. DNS, in its traditional form, is ripe for manipulation. Being based mainly on UDP and without security baked in, it makes interception and manipulation relatively easy. ISPs have been guilty of intercepting and rewriting DNS requests for decades, being monetarily incentivized or politically pressured to do so.

Solution Overview

In this post, I’ll specifically cover how to prevent these elementary attempts at non-compliance by leveraging NAT on a Ubiquiti EdgeRouter device. In doing so, all DNS requests that would pass through the default gateway are redirected to the sole provider of DNS on the network. For example, if a Roku device attempts to reach out to its proprietary DNS servers, since that traffic naturally passes through the default gateway, Ubiquiti EdgeOS will redirect the insecure, connectionless request to the desired endpoint – the Pi-Hole, in my example. Also, as a catch-all safeguard, I prohibit outbound traffic over UDP/53 for one more ounce of prevention.

This type of redirection is configured with two separate NAT rules. One rule modifies the destination address from a public IP, such as Google’s public DNS, to a desired internal endpoint capable of fulfilling the same protocol promises [DNS resolution]. The second rule modifies the source address from that of the original client to that of the gateway that would have been responsible for routing the packet to its original destination.
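Conceptually, the two rewrites look like the toy model below. All addresses here are hypothetical placeholders standing in for the public resolver, internal Pi-Hole, and gateway; real EdgeOS NAT involves connection tracking and much more:

```python
# Toy model of the two NAT rules. All addresses are placeholders.
PIHOLE = ""   # internal DNS endpoint (placeholder)
GATEWAY = ""   # default gateway (placeholder)

def dnat(packet):
    """Rule 1: rewrite the destination of any port-53 packet not already
    headed to the approved resolver."""
    if packet["dport"] == 53 and packet["dst"] != PIHOLE:
        packet = {**packet, "dst": PIHOLE}
    return packet

def masquerade(packet):
    """Rule 2: rewrite the source so the reply returns via the gateway
    instead of going straight back to the client."""
    if packet["dport"] == 53 and packet["dst"] == PIHOLE:
        packet = {**packet, "src": GATEWAY}
    return packet

# A client tries to reach an external resolver directly over UDP/53.
pkt = {"src": "", "dst": "", "dport": 53}
pkt = masquerade(dnat(pkt))
print(pkt)  # destination and source are both rewritten
```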

Web UI of ER-Lite | The previously described NAT rules, shown in the NAT overview

Solution Implementation

The first rule [4053 shown below] is responsible for:

  • Classifying any packet ingressing interface eth1.10 over either TCP or UDP port 53 that is NOT originating from an address in the group ‘DNS_Servers’
  • Modifying the Layer 3 destination address in said packet to be [the IP of a Pi-Hole interface]

set service nat rule 4053 description 'DNAT_PIHOLE'
set service nat rule 4053 destination port 53
set service nat rule 4053 inbound-interface eth1.10
set service nat rule 4053 inside-address address
set service nat rule 4053 inside-address port 53
set service nat rule 4053 protocol tcp_udp
set service nat rule 4053 source group address-group '!DNS_Servers'
set service nat rule 4053 type destination

The second rule [5053 shown below] is responsible for:

  • See the note at the bottom of the post [Edited 12/28/2019]
  • Classifying any packet egressing interface eth1.10 over either TCP or UDP port 53, destined for the [Pi-Hole], that is NOT originating from an address in the group ‘NAT_DNS_REDIRECT’
  • Modifying the Layer 3 source address in said packet to be that of interface eth1.10

set service nat rule 5053 description 'MASQ_PIHOLE'
set service nat rule 5053 destination address
set service nat rule 5053 destination port 53
set service nat rule 5053 outbound-interface eth1.10
set service nat rule 5053 protocol tcp_udp
set service nat rule 5053 source group address-group '!NAT_DNS_REDIRECT'
set service nat rule 5053 type masquerade

In the WebGUI, this is what the configuration looks like. Note: There are subtle differences in naming conventions between the configuration commands and what is displayed in the below images.

Web UI of ER-Lite | The previously described NAT rules, shown individually


Once implemented, any endpoint attempting to access a DNS resource that does not reside on its local network will go through those two NAT rules and be answered by Pi-Hole. One downside to this exact implementation is the loss of query fidelity from a logging perspective. The Pi-Hole logs will show the IP of the gateway performing the NAT translation and not that of the original client, as depicted below.
One consequence of this setup is the loss of the originating client IP address.


For clients attempting to use an “unapproved” DNS resolver, the NAT rules + Pi-Hole fill the void without the client ever knowing that the request was intercepted and manipulated. As shown below, the client sets an external DNS server, and the request is seemingly fulfilled without issue.

nslookup on Windows | DNS answer comes back without issue
Wireshark | Packet Capture of End-to-End Flow | Query > NAT > Answer > NAT

If you enable logging for these specific NAT rules, you can tail the messages log on the EdgeRouter using: tail -f /var/log/messages. Continuing with the example above, here are two log entries generated from a single DNS lookup that is subject to this NAT redirection. If the log is too noisy, be sure to disable logging for other services, such as the firewall or other NAT rules where large amounts of traffic are subject to a logging rule.

Dec 27 16:03:43 ER-LITE kernel: [NAT-4053-DNAT] IN=eth1.10 OUT= MAC=REDACTED SRC= DST= LEN=54 TOS=0x00 PREC=0x00 TTL=128 ID=11067 PROTO=UDP SPT=57439 DPT=53 LEN=34

Dec 27 16:03:43 ER-LITE kernel: [NAT-5053-MASQ] IN= OUT=eth1.10 SRC= DST= LEN=54 TOS=0x00 PREC=0x00 TTL=127 ID=11067 PROTO=UDP SPT=57439 DPT=53 LEN=34

Update: 12/28/2019 – NAT Rule 5053 Explained

The second rule, rule 5053, may be optional, depending on your environment. I will attempt to formulate an explanation; if any clarity is required, please do reach out.

In my case, the client, Pi-Hole, and gateway are all in the same subnet. Without the second rule, the Pi-Hole would respond back to the original client in a seemingly unsolicited fashion. To elaborate, the Pi-Hole would send a DNS answer directly back to the client even though the client was expecting a response from the resolver it originally queried; the client consequently drops the packet, as it is not in the operating system’s state table of expected network communications. In other words, if your Pi-Hole is in a separate subnet from the clients attempting to access unauthorized DNS servers, the second rule is unnecessary.

Let’s take a look at a packet capture of my example setup with the second NAT rule [5053] disabled. In the packets shown below, you’ll note that there appear to be two DNS queries from the client.

If you look closely, you’ll notice the Layer 2 [MAC] address differs between the two queries. The first packet is that of the actual source, and the second packet is forged due to NAT rule 4053 and carries the MAC address of the Ubiquiti EdgeRouter. From the Pi-Hole’s perspective, it receives a packet with a source IP on the same subnet as itself. This causes the OS that Pi-Hole runs on to look at its ARP table for an IP-to-MAC address mapping. If one does not exist, a broadcast is sent throughout the subnet, and the real client [Asus::c9@] responds to the ARP. The Pi-Hole then proceeds to send the DNS answer directly to the client, but the client is expecting a response from the resolver it originally queried and therefore drops the packet as unsolicited.

Note: It is very uncommon to use NAT within the same subnet; in effect, we are translating 10.10.1.x client addresses to that of the gateway.

Proxying Ads and Trackers

We are avid consumers of Pi-Hole, the DNS based ad-blocker, which can run with minimal computational resources. After years of using DNS based ad-blocking, advertisement entities continue to catch on and develop ways to circumvent this simple method for blocking ads. For most of us using any mechanism to block advertisements, when one comes across your screen, or your significant other complains, it’s all hands on deck to understand how to block the offending material.

Pi-Hole Dashboard View – Showing Blocking and General Client Activity [Pi-Hole Admin Web Interface]

Some brief background on Pi-Hole – the internet doesn’t need another blog on how to set up/configure/use Pi-Hole, so I won’t be covering that, but I did want to hover around the 10,000-foot view for a minute. In short, DNS is a fundamental technology that translates hostnames into IP addresses. Pi-Hole is configured using lists, many of which are constantly curated by the community, containing known hostnames that serve up advertising/tracking/analytics. When a client, such as your cell phone, attempts to reach an endpoint on the internet via its hostname, Pi-Hole responds to the query by comparing the requested DNS record to those lists. If the requested resource is in the “Blacklist,” Pi-Hole responds with a null address, effectively rendering the elements that depend on that endpoint useless. In this case, the advertisement element would effectively be blocked.

The response packet from a Pi-Hole endpoint, responding with for a Google Ads hostname [Using Wireshark]
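The core blocking idea can be illustrated with a toy resolver. This is not Pi-Hole’s implementation; the hostnames, records, and the null answer are all illustrative placeholders:

```python
# A toy illustration of DNS-based blocking: answer blacklisted names
# with a dead-end address, pass everything else through.
BLACKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical names

def resolve(hostname: str, upstream: dict) -> str:
    """Answer from the blacklist first; otherwise fall through upstream."""
    if hostname in BLACKLIST:
        return ""  # non-routable answer; the ad element never loads
    return upstream.get(hostname, "NXDOMAIN")

upstream_records = {"news.example.com": ""}  # placeholder record
print(resolve("ads.example.com", upstream_records))   # blocked
print(resolve("news.example.com", upstream_records))  # resolved normally
```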

Recently, we have been experiencing particular sites where Google Ads were being served up in iOS apps and websites using browsers that couldn’t block HTML elements. I am a proponent and regular user of the app SmartNews on iOS; hands down, the best news aggregator app on the platform. Stories from Electrek commonly appear in my aggregated news feed within SmartNews, and with relative consistency, I consume the material they publish. Electrek is a news outlet dedicated to electric transportation and sustainable energy.

Left: SmartNews on iOS [iPhone X] | Right: A story from Electrek with the “Ad” highlighted in red

As seen above, there is an advertisement served from what looks to be Google Ads. After looking at the Pi-Hole logs for a couple of minutes and being unsuccessful in finding an offending domain name, I fired up Edgeomium [Dev Channel] on the desktop to take a closer look at the advertising elements involved. Note that this instance of Microsoft Edge doesn’t have any ad-blocking extensions installed and is still in its default configuration.

Example #1 of a Google Ad shown on Electrek – Not being blocked at the DNS level
Example #2 of a Google Ad shown on Electrek – Not being blocked at the DNS level

After navigating to the relevant article page, right-clicking one of the Google Ads, and choosing “Inspect” to bring up the developer tools, I noticed what seems to be a growing and tricky tactic in serving advertisements. This tactic entirely circumvents Pi-Hole’s ability to interject and prohibit an offending DNS request, since the advertising elements and their dependencies are perceived to be coming from the parent domain. What I found interesting was that circumventions are being developed specifically to target DNS-based ad-blocking capabilities such as those core to Pi-Hole’s functionality.

Looking deeper into the developer tools console, I noticed that the first attempt to load the advertising elements seemed to be organic, in that the page attempted to load them directly from the ad network and not via a proxy mechanism.

The fallback in this specific case is to “proxy” the ads. It seems that an HTTP endpoint is materialized on the publisher’s side, which serves the content in a proxy-like fashion. Note the “perceived serving endpoint” versus the “actual serving endpoint.” The “src” attribute for the image clearly shows that the parent domain is the serving constituent. Additionally, note the “data-aproxy-src” attribute, which indicates the intended endpoint containing the relevant content – in this case, an image of an electric vehicle plugged into a wall charger.

The “Fake” Image Source:

The “Real” Image Source:
Under normal circumstances when using Pi-Hole, this image object would absolutely be blocked, as the domain name would resolve to a blocked address, seen below:

The example above is a single image. Note that the majority of these intrusive advertising/tracking/analytics resources are being proxied. See the numerous scripts that also have proxying attributes added to them in the event they can’t load organically.
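Spotting these proxied elements programmatically is straightforward, since the data-aproxy-src attribute gives the game away. The markup below is a made-up sample; only the attribute name comes from what I observed:

```python
from html.parser import HTMLParser

# Sample markup is invented; the "data-aproxy-src" attribute name is the
# one observed in the article's developer tools.
SAMPLE_HTML = ('<img src="/wp-content/proxied/ad.jpg" '
               'data-aproxy-src="https://ads.example.com/ad.jpg">')

class ProxiedAdFinder(HTMLParser):
    """Collect (perceived src, actual src) pairs for proxied elements."""

    def __init__(self):
        super().__init__()
        self.proxied = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-aproxy-src" in attrs:
            self.proxied.append((attrs.get("src"), attrs["data-aproxy-src"]))

finder = ProxiedAdFinder()
finder.feed(SAMPLE_HTML)
print(finder.proxied)
```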

In summary, advertisers and website operators will continue progressing toward an unstoppable method for delivering the thing most of us despise: ADVERTISEMENTS. The mechanism described above will only become more prolific, as advertisers will make it very simple, if it is not already, to proxy content this way.

Additionally, some of the other tactics I foresee becoming more prevalent are:

  • Browsers beginning to tunnel DNS over HTTPS [DoH] (or Operating Systems for that matter)
  • Bolstering restrictions on what extensions can and cannot do in the browser context
  • Convergence of namespaces to further obfuscate and blur desired content vs. advertising content
  • Leveraging WebAssembly [Wasm] potentially rendering content analysis and detection more difficult

For now, I only have the time and patience to live with this peeve and method of circumvention. One idea that comes to mind as a resolution, outside of using a traditional browser with the ability to block elements inline:

  • Man-In-The-Middle (MITM) Proxy – I would have to install a profile on iOS for the certificate to be accepted on my mobile device.