Okay, it hasn’t quite been a week… but at about five and a half days, it’s close enough to write about!
I own two Karma Gos. One is literally mounted to my living room wall. The other is packed into a carry pouch with a small USB jumper cable and an Anker portable battery pack.
While the rated 5 hours of battery life is decent on its own, I’ve calculated the duo reaches somewhere close to 45 hours of continuous use. That seems a bit extreme, but I’m doing a trial run with my Android phone to see how little mobile data I can use by piggybacking off of the Karma instead.
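That 45-hour figure is back-of-the-envelope math. Here’s a sketch of the estimate; the internal battery and Anker pack capacities below are assumptions for illustration, not measured specs, so plug in your own hardware’s numbers:

```python
# Rough runtime estimate for the Karma Go running off an external pack.
# All capacities here are assumed example values, not official specs.

INTERNAL_MAH = 1500      # assumed Karma Go internal battery capacity
PACK_MAH = 13000         # assumed Anker pack capacity
RATED_HOURS = 5          # Karma's rated life on the internal cell alone
USB_EFFICIENCY = 0.85    # typical losses charging over USB

# Scale the rated runtime by the total usable energy available.
hours = RATED_HOURS * (INTERNAL_MAH + PACK_MAH * USB_EFFICIENCY) / INTERNAL_MAH
print(f"Estimated continuous runtime: {hours:.0f} hours")
```

With those assumed numbers it lands in the low 40s, which is roughly where my real-world guess of 45 hours came from.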
The majority of places I go, I bring my backpack, whether by car or motorcycle. If I can lower my cell data usage and stick with the Neverstop for my all-around data, then I’ll drop my T-Mobile plan from Unlimited at $80/mo to something closer to pay-as-you-go at $3/mo plus the occasional extra data pack.
The Moto X Pure will get a range boost when it gets the 6.x update. Currently T-Mobile’s Band 12 (700 MHz) is disabled on this phone, but fret no more! This comes from David Schuster, a senior director at Motorola, who states on Google+:
4) As part of the Marshmallow release of the 2015 Moto X Pure Edition, LTE Band 12 is enabled for the T-Mobile network.
This was all hinted at in these discussions prior to the update:
Karma (an MVNO that piggybacks off of Sprint’s LTE network) just last week announced a new data plan option for its Karma Go hotspot device. It’s called Neverstop: a $50/mo option that gives up to three devices on your Karma account unlimited data, with speeds capped at 5 Mbps. While they say this service isn’t meant to replace a home internet connection, it *is* a home internet connection for those who didn’t have access to a traditional ISP before!
I live in a semi-rural area, just barely outside the range of any traditional cable or DSL ISP connection. Sure, I could get satellite, but that’s burdened with contracts, high cost, data caps, speed caps, installation fees, device rental fees, and frequent congestion. It’s not worth it. Historically I have gone the mundane route of using my cellphone’s hotspot mode, hanging the phone in a corner window, and connecting my devices to it. This has been a pain: hotspot usage caps are common and difficult to circumvent, and your cellphone is commandeered until you’re done using your internet.
Using Karma Go as-is
For the average use case, you would simply authenticate all of your devices to the Karma Go and then use it as-is… but I am not the average use case. Due to how the Karma Go isolates individual connections, two devices connected to the same hotspot cannot talk to one another. That setup isn’t going to fly if I want to use the Karma with a device that has no web browser to log in with (think Nest or IoT), or with devices that require local cross-network communication (think Chromecast or a home NAS).
Welcome the best of both worlds!
To support my needs, I set up my own oasis of WiFi powered by my Karma Go. How? With two Linksys WRT54G routers (a little dated, but they still work great)! I flashed both with custom firmware from DD-WRT and configured one of them to connect to the Karma Go in Client Mode, with its would-be WAN port reassigned to act as part of the switch. A short Ethernet patch cable then jumps from there into the WAN port of the other router, which is configured as your typical home router.
The Secret Sauce
To make everything talk nicely together, the two routers have to run on different subnets so that NAT can work properly. My client bridge is set up to operate on 192.168.2.x, and the home network is 192.168.1.x… works great!
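If you want to sanity-check an addressing plan like this, Python’s standard `ipaddress` module makes it a one-liner. A minimal sketch using the two subnets from my setup:

```python
# Verify the client-bridge and home subnets don't overlap, which is
# the precondition for NAT routing cleanly between the two routers.
import ipaddress

bridge = ipaddress.ip_network("192.168.2.0/24")  # WRT54G in Client Mode
home = ipaddress.ip_network("192.168.1.0/24")    # second WRT54G, normal router

assert not bridge.overlaps(home), "subnets must be distinct for NAT to work"
print(f"{bridge} and {home} are distinct -- NAT can route between them")
```

If you accidentally put both routers on the same /24, `overlaps()` returns True and the second router can’t tell local traffic from upstream traffic.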
I figured that if a device authenticates to the Karma via a browser but doesn’t need the browser for use afterward, the authentication must operate at a lower level. My guess was that it keeps track of your network card’s MAC address. So I took note of my client bridge’s MAC address, powered the device off, and looked up the terminal command for spoofing a MAC address on a Mac (Windows and Linux machines can do this too, but you’ll have to find out how on your own, Google it).
I wrote down my Mac’s real MAC address, then used this command to make it appear like my router’s:
sudo ifconfig en0 ether 00:e2:e3:e4:e5:e6
(assuming my router’s MAC is 00:e2:e3:e4:e5:e6)
I then went to the login page of my hotspot, and logged in!
Harold, I see your argument, and I agree to some extent… but the way you explain some of the technology seems slightly off. Please let me explain:
My first remark: I understand you to be assuming that an HD television broadcast consumes your data allocation. I do not believe this is the case. Television signals travel down the same wire, yes, but they exist at a different frequency inside that cable, meaning your internet and television are isolated from one another. Here is a somewhat simplified explanation of what I mean:
Let’s say, though, that your $5 movie is coming from Netflix, Hulu, or what-have-you. Then yes, that is a digital download; it lives inside your internet allocation and will consume that data. This is no longer a Comcast problem, because you didn’t hand Comcast the $5 for pay-per-view; you paid someone else who is delivering it to you through the internet, and it will consume a portion of your allocated bandwidth.
Now, to continue with this bandwidth point: say you are downloading a giant BitTorrent of files. The torrent will consume a large portion of your bandwidth because every peer you are seeding to or from has its own persistent connection. If the router tries to level data across all open connections, your one torrent download could hold 20+ of them, whereas your Netflix stream has one, maybe two; Netflix gets a much smaller slice of the pie. Is this Comcast’s problem in this scenario? No, you are using the bandwidth you purchased in irresponsible ways. If you want Netflix to perform well, throttle your BitTorrent downloads.
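To put rough numbers on that slice-of-the-pie idea: assuming a router that splits bandwidth evenly per connection (a simplification; real queuing is messier) and the illustrative connection counts above, the arithmetic looks like this:

```python
# Illustration of per-connection fair sharing. The plan speed and
# connection counts are made-up example numbers, not measurements.

total_mbps = 20          # assumed plan throughput
torrent_conns = 20       # a torrent with 20 peer connections
netflix_conns = 1        # a single streaming connection

per_conn = total_mbps / (torrent_conns + netflix_conns)
print(f"Netflix gets about {netflix_conns * per_conn:.2f} Mbps")
print(f"The torrent gets about {torrent_conns * per_conn:.2f} Mbps")
```

Under this model Netflix is left with under 1 Mbps of a 20 Mbps pipe, nowhere near enough for HD, even though the ISP delivered every bit you paid for.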
For clarity, Comcast wouldn’t open another “virtual pipe” to accommodate this frivolous usage. They are already throttling your overall throughput down to 20 Mbps (or whatever your plan is set to). Many ISPs do offer “burst” allowances on business services, where the ISP holds off on throttling you for a period after a sudden surge in traffic. As stated, this is mostly found in business services, so that the business can better serve its clients.
Also, in either case, there should be very little reason why packets would start dropping, even under heavy load, unless your connection were completely saturated and you started getting timeouts. You shouldn’t drop packets just because the going gets slow. If you do, it’s because your hardware is not performing as it should: a bad connection, interference, poor firmware, bad drivers, etc.
The real concern behind this article is that it would allow ISPs to play mafia on the internet. The article uses the example of tollways on freeways, but I think it could get much uglier than a dirt road.
Imagine you live in a neighborhood controlled by an ISP. They are the only ISP that services your area, so you must pay them to get around (on the internet). Suppose they disapprove of you visiting Netflix, because THEY offer movie rental services too. They could individually throttle your connection(s) to Netflix whenever you attempt to watch a movie. Eventually you’ll get fed up and buy it somewhere else… like directly from your ISP! Netflix did nothing wrong; their bandwidth was plenty fine, they had ample room for you, and they accommodated you any way they could. Your ISP, overseeing your activity, discriminated against Netflix and, without your knowledge, served you a bad experience to steer you in the direction they wanted you to go.
This sort of power could change everything from spending down to elections: you steer users toward only the data you want them to access by putting up barriers around the places you don’t want them to see. I’m rarely viewed as an extremist when it comes to political matters, but this could be used as a form of corporate communism, and the most dangerous part is that the end user could be completely oblivious to what’s going on.
Recently I have been playing with my RPi, and it has brought me to a curious decision. It is a pain to constantly shift code from my laptop to the Pi for a quick test, then make adjustments and try again. Mercurial is great, allowing me to push and pull changesets all over the place… but I feel that I have reached the point in my life where I must learn the “Vim”.
I have maintained a personal joke that I am not a “true programmer” until I learn Python. Interestingly enough, the Raspberry Pi came along and held my hand (and mental reasoning) into the world of Python. Along with that experience comes this need/desire to learn the “Vim”… so from my Pi projects onward, Vim will be my editor of choice.
I will teach myself Vim the same way I taught myself to type… brute force.
I am in the market for 2.5″ SATA HDs. I have a server that insists 2.5″ is the future, so that is what it holds. I wanted to know my best price per gig, so compiled below is a current (at time of posting) comparison of 2.5″ SATA HDs found on Newegg.com. The point of the table is to determine the best value in space per dollar. Since most of these drives are 5,400 RPM, it’s understood that the whole reason for them is simply storage, which makes most of the other speed factors next to negligible. I plan on populating a server with these small and cheap drives and running some form of RAID to help boost performance a tad.
You will notice a 750 GB drive with a better rating; the only reason is that it is refurbished and limited to 5 per customer, so it’s not exactly standard product or pricing. Actual pricing starts about $10 higher for the 750 GB models, which is also listed.
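The metric behind the table is simple to compute yourself. A quick sketch, using made-up drive names and prices rather than the actual Newegg listings:

```python
# Price-per-gigabyte comparison, the value metric behind the table.
# Capacities and prices are illustrative examples, not real listings.

drives = [
    ("Drive A", 500, 55.00),    # (name, capacity in GB, price in USD)
    ("Drive B", 750, 70.00),
    ("Drive C", 1000, 90.00),
]

# Sort cheapest-per-GB first and print each drive's cost per gigabyte.
for name, gb, price in sorted(drives, key=lambda d: d[2] / d[1]):
    print(f"{name}: ${price / gb:.3f}/GB")
```

With these example numbers the largest drive wins, which matches the general pattern I saw: capacity per dollar tends to improve with size until you hit the newest, largest tier.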
tl;dr: Shared directory symlinks under Windows for VMware Fusion
My new workplace is mostly a Mac shop, and since I develop with a few tools that are Windows-only, I needed a solution to cope with my needs. Dual boot/Boot Camp wasn’t an option: in no way did I want to be constantly restarting my system, and I also doubt Boot Camp drivers were even available for my Mac with Retina display.
I could have gone with VirtualBox, my trusted VM solution of choice for many years… but after some more research into the matter I discovered that VMware Fusion was a very focused but powerful solution. It is also a commercial product. Retailing just under the $50 mark, it isn’t terribly expensive for what you get, and it pales in comparison to the cost of the Windows license I needed to purchase as well.
I went the VMware Fusion route and have been exceedingly pleased. This product is not an end-all for my virtualization needs; VirtualBox will remain my VM of choice for Linux hosts. But VMware’s performance and flexibility are smooth and beautiful: it provides better hardware drivers for things like the graphics card, and it runs with Windows Aero enabled without even blinking an eye. Full screen or in Unity (seamless) mode, it works great.
I have not tried gaming within the VM yet, but my guess is that it works well. Probably not perfectly, but well enough to be doable.
One hurdle I had to overcome was shared directories.
Mapping a directory via conventional means didn’t work. I am a developer, and source code compilers tend to be incompatible with network paths; they often try to resolve them, or invoke .bat scripts that cannot cope with network paths. So the solution is to map a network path to a local directory using a symlink under Windows:
mklink /d “C:\Users\andy\Repositories” “Z:\andy On My Mac\Documents\Repositories”
The above command is an example of what I used to resolve the issue. The only downsides: first, the IDE doesn’t pick up external changes automatically, so if you change something outside the IDE you must refresh the directory. Not a big deal, since it’s one button click and the changes are found. The other downside is compile time: what would normally be a 10 or 15 second compile turns into 20 to 30 seconds, nearly double. I believe this is because everything must loop through a virtual network adapter to get its network-like features. There are ways around this as well; my IDE has a ‘bridge’ feature that allows compiling on the host, but I haven’t bothered to set that up, because even with the longer delay the system works very well and is very stable.
Living situation: all roommates have now moved out!
Lots of hardware and other changes to catch up on.
To elaborate a little bit more:
Point number 1: When I first got my house, I figured it would be a good idea to have roommates. They pay me, I get rich! Oh, was I wrong. Between late rent, lies, and disrespect, this experience left me with nothing more than a new understanding: roommates suck. I am now free. I have some cleaning up to do, but free nonetheless.
Point number 2: For the first time in my professional career I have transitioned to a new employer. I left my job at the Johnson Center for Simulation and started working at Clockwork Active Media Systems. The several weeks before the transition were wrought with interviews and prep work, and the several weeks following (going on 3 now) have been all about learning curves, adjusting, and, oh yes, producing.
Point number 3: Work has issued me a MacBook Pro Retina. I am not a Mac person, never was; it was one of my least favorite mainstream OSs. Since using it, though, it has definitely traded places with Windows (especially Windows 8). I have learned to be much more proficient with it, but to be fair, I did mark my Mac with a sticker of Tux eating the Apple logo.
It seems that recently I have been very pressed for time. This isn’t stressing me out, as I am making progress in many of my works, but all the same, some places are beginning to show signs of neglect.
Facebook… which, honestly, I couldn’t care less if people start wondering why I haven’t posted about “What’s on your mind?” recently.
(Much) more importantly, this blog. I have a list of things I want to post about, but haven’t had a lot of time to sit down and write a few good posts (I have been too busy writing code).
Here is a list of items I wish to blog about (to let you know what *might* be coming, and also so I don’t forget):
- Mercurial, my new love: everything I wanted to keep from SVN, everything I wanted in stability from Bazaar.
  - Convert, with filemap
    - With Google Code projects
    - With personal SVN projects
- Kiln: Mercurial’s best friend, and it should be yours too. In this post I will explain why.
  - Using Kiln with your projects
  - FogBugz (I haven’t used it much yet, but it’s part of the package)
  - My work currently uses VersionOne; I will compare the two and tell you why I wish to convince my boss to switch.
- Some projects, and getting them done and out the door
  - Updates about what I have been doing
  - Projects recently released
- Oh, and so much more that I don’t currently have processing in the conscious side of my brain.
I have recently begun using Windows PowerShell for some bash-like tasks under Windows. Thus far I like it much more than cmd. One minor note: by default the text is white on a blue background. Why is this worth mentioning? Well, it scares me quite often!
I run VMs on several different servers and on this workstation, and some of those VMs are Windows-based. A PowerShell window filled with text looks eerily similar to a BSOD, and it makes me jump every time I catch a glimpse of that window out of the corner of my eye. It has been engraved into my subconscious that a blue screen with white text means Windows has crashed. Now the exception is PowerShell.