If you follow me on Twitter (@marcosluis2186), you probably know that I'm a Buffer fan. This incredible tool and its ecosystem have allowed me to grow my personal brand on every major social media platform through its clean UI and its powerful analytics dashboard. I've been using Buffer since I returned from my previous job in Venezuela, where I was a HootSuite Ambassador for LATAM. Don't get me wrong: HootSuite is a great social media management platform, but it isn't a good fit for my country and the characteristics of Internet connections here. HootSuite is built on Adobe Flex and ActionScript, which are very heavy in terms of network data use and speed; Buffer, by contrast, has a clean and simple dashboard with a small footprint, built mainly on HTML5 and CSS3, and it serves its static files around the world through Amazon CloudFront as its CDN, which works great. But these are not the only reasons I use Buffer every day now; there are more benefits if you get on this boat. Keep reading.
Yesterday, I was reading the Docker newsletter from March 12, and I saw a lot of interesting links about how people use Docker for several purposes. One of the write-ups that piqued my curiosity was a post written by Kenny Bastani called “Getting Started with Apache Spark and Neo4j using Docker Compose”. The article describes in a simple manner how to run Apache Spark and Neo4j together using a Docker-based image. That led me to research Apache Spark in the enterprise, and in the process I landed on the Databricks site. This incredible team has developed an easy way to create an Apache Spark cluster with Databricks Cloud, which lets you deploy Spark as a Service on a unified cloud-hosted data platform.
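To give a feel for the kind of setup Kenny's article covers, a Docker Compose file for a Spark-plus-Neo4j stack might look roughly like the sketch below. The image names, ports, and commands are my assumptions for illustration, not the actual file from his article.

```yaml
# Hypothetical sketch only: image names, ports, and commands are assumptions.
version: '2'
services:
  neo4j:
    image: neo4j            # official Neo4j image
    ports:
      - "7474:7474"         # Neo4j browser / HTTP API
  spark-master:
    image: some/spark       # placeholder Spark image
    command: master
    ports:
      - "8080:8080"         # Spark master web UI
  spark-worker:
    image: some/spark
    command: worker spark://spark-master:7077
    depends_on:
      - spark-master
```

With a file like this, a single `docker-compose up` brings up both services on one host, which is exactly the kind of convenience that caught my attention in the article.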
Apache Spark aims to become the platform for Big Data thanks to its incredible performance, its ease of use, and the fact that it is built for fast result delivery. I spent almost a day reading everything I could about Apache Spark, and I found a great article on LinkedIn written by Kavitha Mariappan (VP of Marketing at Databricks), where she described why every company interested in extracting value from its data using Apache Spark could use Databricks Cloud to do it.
Then, I began to think about the long term for Databricks Cloud, asking questions like: what about the new development trend of “Dockerizing” everything? Docker containers are an amazing way to deploy enterprise applications quickly, securely, and cleanly; so what if Databricks and Docker worked together to create a Docker-based image for deploying Apache Spark quickly? Beyond that, you need a strong base for all of this, and the recent release of Red Hat Enterprise Linux 7.1 is perfect for it: its new Atomic Host offering is a version of the battle-tested enterprise Linux distribution optimized to run container-based applications, and combined with the Real-Time variant it could be a great complement for an Apache Spark cluster, given Spark's need for low-latency response times across its distributed architecture. But what benefits could a collaboration among these great companies bring to the world? Keep reading to find out.
After several very busy weeks at work, I've had little time to write a post. In the past days, I have been working with Apache Solr, the de facto open source platform for enterprise search applications, because I'm leading a new team building a search application for a client, and of course the first reference is this incredible piece of software. I have to thank Andy Wibbels (CMO) and Max Bunag (Sales Director – Southwest US, APAC and LATAM) from the Lucidworks team, who helped me find the right content about Solr's scalability and performance tuning. In this “search”, I found the amazing talks from the last Lucene/Solr Revolution event, focused entirely on Lucene and Solr, and this post is about my favorite talks from that event.
Some months ago I read an exceptional book called “Think and Grow Rich”, written by Napoleon Hill, where I learned something fundamental about commanding the course of your destiny: focus, determination, and desire. I wanted to apply that to deciding where I want to work. After “thinking” about this for days and months, looking for a company with enough potential to grow into a global success, one that is part of the Big Data revolution, is embracing and riding the cloud wave, and has a strong vision for the next ten years, one name came to my mind: DataStax. You may think I'm wrong to say this openly, but for me that's not the case: I'm simply focused, determined, and with a strong desire to succeed. If you have read the “2013 Silicon Valley Career Guide” by Andy Rachleff (Chairman at Wealthfront) on how to evaluate a company in the Valley, you will know why I want to work at DataStax, which Andy lists as one of the mid-sized companies with momentum; but that is not the only reason I want to put all my effort into working at DataStax. How did I arrive at the conclusion that this company is the right one? The words that follow explain my decision: how I researched the company and its current and prospective customers, how I came to understand its growth potential, and more. As the title of the post suggests, I will lay this out as facts, analyzing each point one by one.
Some weeks ago, I received an email from Jeff Barr (Chief Evangelist at Amazon Web Services) explaining the new features in the new Amazon Linux AMI, and when I finished reading the post on the AWS blog, I discussed with my team the impact all these features could have on AWS-based Big Data analytics platforms. The complete list of changes is here. When you begin to analyze all the improvements inside the Linux 3.14.19 kernel, you have to wonder what this release could do for high-performance analytics platforms like Amazon Elastic MapReduce (EMR), for your Amazon Redshift cluster, or for your own Hadoop cluster running on Amazon EC2 with this Linux AMI. I will comment on some of my favorite features in this post. Keep reading.
Some days ago, I was working with Pentaho Data Integration (known as Kettle), and on recent versions of Linux the platform wouldn't start because of a problem with libsoup2.4, an HTTP library implemented in C that is used by the WebKit libraries shipped with Pentaho PDI. The platform uses libsoup2.4 to launch the quick-reference site that explains some basics of Pentaho Business Intelligence, but for some reason that page wasn't being shown, and the launch process stopped every time I tried to start the program. So, how do you fix it? Keep reading.
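Before getting to the fix itself, a reasonable first diagnostic on a system like this is to check whether the dynamic linker can even resolve libsoup-2.4. The sketch below is my own assumption about a sensible first step, not necessarily the fix this post describes; the package names are assumptions too and vary by distribution.

```shell
#!/bin/sh
# Hypothetical diagnostic sketch: check whether libsoup-2.4 is visible
# to the dynamic linker before launching Spoon (spoon.sh).
if ldconfig -p 2>/dev/null | grep -q 'libsoup-2\.4'; then
    echo "libsoup-2.4 is visible to the linker"
else
    # Package names below are assumptions; they differ per distribution.
    echo "libsoup-2.4 not found: try installing it, e.g."
    echo "  sudo apt-get install libsoup2.4-1    # Debian/Ubuntu"
    echo "  sudo yum install libsoup             # RHEL/Fedora"
fi
```

If the library is present but Spoon still stalls on the welcome page, another workaround I have seen mentioned (again, an assumption on my part, not necessarily this post's fix) is to disable the welcome page on startup via Spoon's options so the WebKit-based page is never loaded.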
You may be saying: “This guy is totally out of his mind”, and I tell you: perhaps, my friend, perhaps! I think everyone needs a short moment of madness, and after that you can think and make your points more clearly. But seriously, in an era where financial markets go up and down so quickly, every company should think about new paths to profitability, and putting some eggs in the right baskets can be very lucrative if it's done the right way.
So, for that reason, I think Oracle should follow the example of Google (Google Ventures and Google Capital), Qualcomm (Qualcomm Ventures), SAP (SAP Ventures), and Intel (Intel Capital) and create its own venture capital firm. But a different kind of VC firm. How? Keep reading.