Star Wars: The Force Awakens Blu-ray leaks online early — available for torrent download


The Star Wars franchise is a runaway-train phenomenon, lasting decades and enjoyed by several generations. When the newest film, The Force Awakens, was announced, there was a flurry of media coverage and marketing cross-promotions. Quite frankly, I found myself sick of the Star Wars franchise after a while, although I did eventually see the film — it was pretty good.

Yesterday, a rip of the retail Blu-ray version of Star Wars: The Force Awakens apparently made its way to the seedy underbelly of the Internet. Even though the disc is not available for purchase until April 5th, pirates can start watching immediately. This is rather sad, as honest fans of the franchise must wait to spend their money, while dastardly thieves can watch now for free.

According to TorrentFreak, within a mere 12 hours of being leaked to the net, the film had already been downloaded about 250,000 times. That number should increase dramatically over time.

The lowest price I found for pre-ordering the Blu-ray disc is $19.99. If we round that up to $20 and multiply by 250,000 downloads, that equals $5 million lost in the first 12 hours of the leak.

With that said, it is hard to predict actual lost sales, as pirates can still buy the disc when it is released. Surely there are fans of the science fiction franchise who still plan to buy despite also pirating. It is understandable — though still illegal — that they just couldn't wait.

Is it morally wrong to download a film even if you plan to buy it? Tell me in the comments.

Image Credit: Marcos Mesa Sam Wordley / Shutterstock


Original URL: http://feeds.betanews.com/~r/bn/~3/IbFd0J0YZS4/

Original article

How to access Windows 10’s secret Apps View


How do you check to see what apps you have installed on your Windows 10 computer? Your first port of call might be the Start menu. While this might seem logical — there’s an All Apps entry, after all! — it’s not necessarily the best option.

If you've spent any time working in File Explorer, you'll know that there are various views to choose from; different views suit different tasks. What you might not know is that there's a secret Apps View which is great not only for showing you what you have installed, but also for making it easy to perform actions such as pinning multiple apps to the Start menu.

Just as with the All Apps section of the Start menu, Apps View shows you not just the modern apps you have installed, but also the traditional applications. The reason you might not have been aware of this view's existence is that it is not referenced in menus or toolbars — you need to know the right command to invoke it.

There are two ways to access this handy mode:

  1. The first option is to open up File Explorer and type Shell:AppsFolder into the Path bar before hitting Enter.
  2. Alternatively, call up the Run dialog by pressing the Windows key and R at the same time, type Shell:AppsFolder, and press Enter.
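If you prefer to script it, the view can also be opened programmatically. Here is a minimal sketch, assuming Python 3 on a Windows 10 machine; it simply hands the shell:AppsFolder moniker to explorer.exe, the same target you would type manually:

import subprocess

# Open Windows 10's hidden Apps View.
# explorer.exe resolves the special "shell:AppsFolder" moniker to the
# same virtual folder you reach by typing Shell:AppsFolder by hand.
subprocess.run(["explorer.exe", "shell:AppsFolder"])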


Right-click any of the apps listed and you can then pin it to the Start menu, pin it to the taskbar, or uninstall it. It's a great deal easier than fiddling with the Start menu when you want to work on multiple shortcuts.

Give it a whirl!

Photo credit: Stanislaw Mikulski / Shutterstock


Original URL: http://feeds.betanews.com/~r/bn/~3/egwA9xNOy1g/

Original article

Why Microservices?

A Completely Biased Summary of the Assets of Microservice Architectures

by @GuidoSteinacker

When we started 'Lhotse', the project to replace the old, monolithic e-commerce platform of otto.de, a few years ago, we chose self-contained systems (SCS) to implement the new shop: instead of developing a single big monolithic application, we vertically decomposed the system by business domains ('search', 'navigation', 'order', …) into several mostly loosely coupled applications, each having its own UI, its own database, redundant data, and so on. If you are more interested in the 'how' instead of the 'why', please have a look at an earlier article on monoliths and microservices.

Lately, some of these SCS turned out to still be too large, so we decomposed them further by extracting several microservices. Because we are already running a distributed system, cutting applications into smaller pieces is now a rather easy exercise. This is one of the reasons why I agree with Stefan Tilkov that you should not start with a monolith when your goal is a microservices architecture.

This article is not about the pros and cons of microservice architectures; it is mostly about the pros. Not because there are no downsides, but because I'm biased and completely convinced that microservices are a great idea.

The following sections might give you some insight into whether or not microservices are for you – but keep in mind that I'm not working for a company that has implemented microservice architectures in lots of different contexts (by the way, not many people in the world have that kind of experience), but only in one: a really large online shop, developed as a greenfield project, with millions of customers, sometimes more than 1,000 page impressions per second, almost €3 billion turnover per year, and more than a dozen dev(ops) teams continuously working on the software.

Very often, you do not really need a 'real' microservice architecture, but 'only' some (more coarse-grained) self-contained systems.

In my opinion, the concept of SCS is far more important than microservices, because it solves similar problems while being much more pragmatic in many situations. In addition to this, SCS are a perfect starting point if you later decide to introduce microservices. Most of the advantages I will mention in the following are also applicable to SCS.

Efficiency & Speed

Every developer is able to start up a microservice and hit a breakpoint in a debugger on a notebook within seconds. Try this with a ten-year-old, multi-million-LOC monolithic application and you know what I'm talking about.

It's not only debugging: you also want to develop test-first, or at least have good test coverage, right? If you try to do so, you will notice that it is extremely helpful to be able to build and test your application within seconds. The smaller your application's code size, the better and easier this gets.

You might argue that a proper build and project structure is good enough to execute unit tests frequently and quickly. But even then you will probably (hopefully) want to have integration and acceptance tests. You could rely on nightly builds or tests that only run on CI servers to work around these kinds of problems – but then you may have to deal with constantly broken build pipelines and far longer cycle times during development. Microservices make it much easier to establish test pyramids that are quick enough to be executed by every developer before changes are pushed to the VCS.

Continuous integration, delivery and deployment have a number of great advantages compared to more traditional approaches to releasing software. Especially if you are working with multiple teams on a single system, it is much easier to frequently deploy a number of small, independent applications than a single, big monolith. The reason is easy to understand: if you only have one application, all teams must be able to deploy their part at the same time. If your system consists of multiple applications, you will naturally find more days (or hours) where at least one of them is in a state to be deployed into production.

The following chart depicts the number of live deployments per week over the last two years. At the beginning of 2015, we introduced our first microservices, and over the year we developed more and more of them. In addition, we learned how to deploy these services fully automatically after every commit. By doing so, we increased the number of deployments from ~20 to more than 200 per week.

Deployments per Week 2014-2016

The red line shows the number of incidents we internally classified as “priority 1” or “emergency”. As you can see (and contrary to many people's fears), there is no correlation between the number of deployments and the number of serious errors.

At otto.de, every application is implemented by exactly one team, and every application can be deployed independently because they are loosely coupled. Consumer-driven contract tests (CDCs) are very helpful to ensure that a deployment of one service does not break other services.
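To make that concrete, here is a minimal sketch of what a consumer-driven contract test could look like, written with pytest and requests; the provider URL, the /products endpoint and its fields are assumptions for the example, not otto.de's actual API:

import requests

# Hypothetical CDC test: the *consuming* team writes it, and the providing
# team runs it in its deployment pipeline. The consumer only pins down the
# fields it actually relies on, leaving the provider free to change the rest.
PROVIDER_URL = "https://provider.ci.example.com"  # assumed host

def test_product_contract():
    response = requests.get(PROVIDER_URL + "/products/42")
    assert response.status_code == 200
    product = response.json()
    assert isinstance(product["id"], int)              # field the consumer reads
    assert isinstance(product["name"], str)            # field the consumer displays
    assert isinstance(product["price"], (int, float))  # field the consumer sums up

If such a test fails in the provider's pipeline, the provider knows before deploying that the change would break a consumer.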

There is no need to coordinate deployments between applications anymore. Developers deploy software on their own, or deployments are fully automated. We are currently deploying multiple times a day, up to 250 times a week (and continuously increasing). If you are able to deploy more or less every single commit into production within minutes, you definitely gain momentum. Every kind of development is accelerated. And the more often you deploy, the less risky your deployments become.

Also, interesting approaches like hypothesis-driven development are supported by quick develop-release-test-learn cycles. The shorter the time between development and learning, the quicker you will be able to learn from your mistakes – and successes.

However, the most important advantage of microservices with respect to efficiency and speed is that developers can really understand them in every detail. In contrast to a hundred-thousand-lines-of-code (LOC) monolith, every new developer in a team is able to become productive within days or even hours. In my team, for example, it is very common for a new team member to deploy their first commit to live on the very first day (first-commit-to-live is a very interesting KPI, by the way).

Being able to understand an application in every detail is also important in everyday development: if you have to fix a problem or extend the application, you first have to understand what to do. The larger the application, the more time you will generally need to change it without breaking something important. Put the other way around: no developer on this planet is able to fully understand a million-LOC application. OK – at least I am completely unable to do so.

Microservices, on the other hand, will help you quickly find out which part of the system has to be changed, and how to change it without breaking anything – simply because microservices are small enough to be understood by people like me.

Motivation

Do you remember the last time you had the chance to build a new system from scratch? For many developers, greenfield projects feel like heaven: you do not have to mess around with 'legacy' software, and you are able to use up-to-date solutions. You can develop a clean design, and you are not restricted by old, botched-up code.

It's fun. It's motivating. You can try out new stuff. It's far more productive. And – because you are constantly improving your development skills – the new microservice will probably be better than what you have built before.

Because microservices are – well – micro, you will be able to have these feelings every few weeks or months. And it's not only the motivation of building something new: every microservice is also part of a big, interesting architecture. And it is very motivating to solve problems every day in a very efficient way, without having to dig through years-old crappy code.

Why should you care about the motivation of developers? What's so bad about having them sift through legacy code? Well, in case you didn't realize it yet: software development is an employee's market. Great developers are not applying for a job in your company – your company has to apply to every single great developer. If you intend to get (and keep) really good developers, you have to do something. If you frustrate them with boring stuff – well, good luck.

Scalability

OK, I will keep this part short, even though it is one of the most important topics, because I've already covered it in some earlier articles [German].

     Technical Scalability

If you need to scale things (like software), you need to decouple them along different dimensions (the more, the better):

You can scale by cloning the same thing (aka load balancing): for example, you can clone one application multiple times and balance load across these clones.

Load Balancing

You can scale by grouping similar things (aka sharding): for example, you could have multiple applications responsible for different data sets, or multiple data centers responsible for different user groups.

Sharding

You can scale by separating different things: a system can be decomposed into multiple collaborating applications. This can be achieved by vertical decomposition (multiple mostly independent applications or logical units) and/or by also splitting horizontally (distributed computing).

Vertical Decomposition

Distributed Computing

A perfectly scalable system uses all three (in my opinion, four) dimensions: it consists of multiple logical “verticals”, each possibly consisting of multiple microservices. Every microservice may have one to many clones, so the load can be distributed over multiple processes. Sharding is used to distribute different kinds of requests to multiple data centers or clusters using geo-location or other shard keys.

Sharding + Vertical Decomposition + Distributed Computing + Load Balancing

In a microservice architecture, you already have two dimensions of scale: load balancing and distributed computing. This enables you to scale individual parts of your system independently. In an online shop, for example, you would possibly have a few instances of a shopping-cart microservice but more instances of a search microservice, because your customers most probably search for products more often than they buy them.

The more services you have, the easier it is to scale single things independently.
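To make one of these dimensions concrete in code: sharding boils down to a function that maps a shard key to a fixed home shard. The following is a minimal sketch; the choice of customer ID as the key and the data-center names are assumptions for the example:

import hashlib

# Hypothetical shard routing: hash a customer ID, then map it to one of
# N data centers. Hashing spreads consecutive IDs evenly across shards,
# and the same customer always lands on the same data center.
DATA_CENTERS = ["dc-north", "dc-south"]  # assumed names

def shard_for(customer_id: str) -> str:
    digest = hashlib.sha256(customer_id.encode("utf-8")).digest()
    return DATA_CENTERS[int.from_bytes(digest[:4], "big") % len(DATA_CENTERS)]

print(shard_for("customer-4711"))  # deterministic: always the same DC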

     Organizational Scalability

Especially in large systems, there is often a need to scale not only with respect to load or data: you will most likely also want to scale with respect to the number of developers and other people.

If you need more than a few development teams, microservices will help you scale your organization: different teams can be responsible for one or more services, and multiple services and/or teams can be grouped by feature domains or other criteria.

The more services you have, the easier it is to introduce a new team. Even more important: development teams can be kept small, autonomous and efficient.

     Application Complexity

Compared to monolithic architectures, the complexity of a single service is extremely low. Every single developer is able to fully understand a microservice in every detail.

Agreed, the overall complexity of a microservice architecture is high. But especially in the beginning, new team members will most likely concentrate on one or a few services, which is extremely helpful for getting new developers productive on short notice. The interesting part is that the high-cohesion/loose-coupling principle makes it much easier for everybody to understand what a change to the software will do.

Because microservices are so small, there is almost no need to introduce a highly sophisticated internal architecture with lots of abstractions, layers, modules and frameworks. Small services can be implemented simply and straightforwardly.

Just like classes, interfaces and namespaces do, microservices and verticals help you structure your system. And because they are separate processes, it is at least difficult to (un)intentionally bypass their APIs. This is why microservices will also help you scale your application (or system) with respect to its size and complexity.

Chances & Risks

In a monolithic architecture, it is very risky to rely on cutting-edge languages and frameworks. Today, using Java might still be OK, but it is already getting difficult to find 'great developers' if you have to stick to only this language. Many developers like modern languages. They like new technology. They most often dislike stuff that was modern ten years ago… But it's not only about developer motivation; it is much more important to keep your software up to date. Software is the DNA of modern companies, and it is very expensive to develop.

Microservices give you the opportunity to use the 'best' tech stack for every single use case. If you choose something that turns out to be not-so-perfect, it is easy, cheap and almost risk-free to replace a microservice with a better solution.

You can also try out new languages, frameworks or new (micro-)architectures. Not because it's fun, but because you and your developers can learn from the experience – you will become better developers.

This kind of learning experience is something completely different from any “hobby project” a developer can do in his or her spare time. Learning from code in production is the real thing.

Automation & Quality

One point about microservices seems to be a disadvantage in the beginning: you are forced to automate everything. Setting up new services, configuring services, firewalls and load balancers, deploying services, updating services, setting up databases, and so on, and so on. If you had to do these kinds of things manually for every single microservice, people would go mad. You definitely need to automate such tasks.

While automation is initially a hurdle, you will really profit from it: your system will become much more robust, configuration errors will decline, and manual work will be reduced to a minimum.

In my team, for example, all microservices are continuously deployed into production after every single commit (I'm partly lying: the oldest and biggest service is still different – but that one is not micro, and we are working on it). The deployment pipelines of the microservices are fully automated, without a single manual step. And not only that: setting up and configuring a new microservice is automated, too, as is setting up its deployment pipeline. Updating common dependencies on third-party libraries is scripted, and so on.

The effort to let software do everyday tasks is initially high – but it's very rewarding. Currently it takes us maybe half an hour to create a new microservice, including its pipeline, and bring it into production.
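To give a flavor of what such automation can look like, here is a minimal scaffolding sketch; the directory layout and the pipeline file format are invented for illustration and do not show our actual tooling:

from pathlib import Path

# Hypothetical scaffolding script: one call creates a service skeleton
# plus a pipeline definition, so creating a new microservice involves
# no manual setup steps at all.
PIPELINE_TEMPLATE = """stages: [build, test, deploy]
service: {name}
deploy_on: every-commit
"""

def scaffold(name: str, root: Path = Path(".")) -> None:
    (root / name / "src").mkdir(parents=True, exist_ok=True)
    (root / name / "tests").mkdir(parents=True, exist_ok=True)
    (root / name / "pipeline.yml").write_text(PIPELINE_TEMPLATE.format(name=name))
    print(f"scaffolded {name} with an automated deployment pipeline")

scaffold("product-search")  # assumed service name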

We are not managing bugs anymore: we simply fix them. If we introduce a more complicated bug, we simply switch back to the previous version – and we usually know which commit introduced the bug. Our QA people are not managing bugs anymore, they do not have to test “releases”, and they do not even write regression tests on their own (because the developers do). Instead, they now have the time to bring their expertise into the process earlier: they help product owners write acceptance tests, and they are able to pair with developers. Generally speaking, their primary objective is shifting from hunting bugs to preventing bugs.

Fault Tolerance

Microservices should be responsible for a single feature (or a few highly cohesive features). If a microservice crashes, only a single feature is affected. Of course, it is not really easy to develop a system of microservices that is resilient to failures. But cutting a system into independent applications is at least very helpful, because processes are great failure units (aka bulkheads) – not only because today's operating systems are very robust against crashes of single processes, but also because developers are forced to access components through their exposed APIs.

In a monolithic application, it is very tempting and easy to bypass (or break) a module API and directly access the internals of the module. Over time, even the best design will erode because of such shortcuts – accidental ones and intentional ones (“I'll fix it later…”). From the perspective of fault tolerance, this may quickly become a real challenge, because after some time it is extremely hard to understand what kind of problems will cause what kind of errors. The more tangled your code is, the more likely it becomes that some error will be able to tear down your whole system.

Using microservices instead of monolithic applications is very helpful for keeping your features loosely coupled. New and/or poorly implemented features will mostly have no impact on other features, because they run in a separate process, possibly on a different machine, with a separate database and code base.

There is no such thing as a free lunch: choosing a microservice architecture alone will not give you fault tolerance. But microservices will help you implement a resilient system – and keep it that way over the years.
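One generic building block for that kind of resilience is a circuit breaker around calls to other services. The following is a minimal sketch of the pattern, independent of any particular stack:

import time

# Minimal circuit-breaker sketch: after max_failures consecutive errors,
# calls fail fast for reset_after seconds instead of queueing up behind
# a broken dependency and dragging the calling service down with it.
class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # cool-down elapsed, probe again
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

Wrapping every remote call in something like this turns a slow or crashed neighbor into a fast, local error that the caller can handle gracefully.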

Sustainability

This topic is – at least in my understanding – the most important one. Most of us forget that today's most up-to-date systems will be tomorrow's legacy applications: the stuff that you, or the people after you, will swear about; software that might prevent your company from surviving future challenges.

Monoliths are bad design – and you know it. If you build some important application today as a monolith, some day you will inevitably have to replace it with something new. Maybe in two years? Ten years? Thirty years? Some weeks ago, OTTO replaced its stone-aged host system. Not because it was not working anymore, but primarily because it was almost impossible to find developers willing to work on a decades-old application – and because the existing developers were retiring one after another. Replacing this system was definitely no fun: the project took years, cost a lot of money, and was highly risky.

In the last two decades, companies have built thousands of monolithic applications using J2EE and similar (already partly outdated) technologies – sometimes hundreds in a single company. How big do you think the chance is that these applications will still be adequate ten or twenty years from now? In a few years, today's mediocre solutions will be much worse than any host system from the seventies. Companies will go out of business just because of the software that we are still developing today.

Some months ago, Oliver (my former boss) said something very optimistic about the current otto.de platform: “we will never again have a project to reimplement otto.de”. He was probably right: while we might have to replace every single bit of the current software in the next ten or so years, we will not need an incredibly risky project that replaces everything in a single step. Maybe we will see dozens of larger efforts to replace parts of it, but we should never face the need to replace really large parts of it at once.

In the first place, we are building software to solve today's problems. Sometimes we are also able to build software that is the foundation for solving tomorrow's problems. But if software really is the DNA of your company, and if your business has to keep changing as rapidly as it does today, then you definitely need a software architecture that is sustainable over years and decades.

In my opinion, this is by far the most important aspect of microservice architectures: if you get it right, you have the chance to build a system that will never be the reason why your company is unable to adapt to changing requirements. You will have to work on your software continuously – but you are able to do so in relatively small, affordable and low-risk steps.



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/wpuJLVkrR0Q/

Original article

High-Availability Storage with GlusterFS on Debian 8 – Mirror across two storage servers

 HowToForge: This tutorial shows how to set up high-availability storage with two storage servers (Debian Jessie) using GlusterFS.


Original URL: http://feedproxy.google.com/~r/linuxtoday/linux/~3/CW_USFulOFo/high-availability-storage-with-glusterfs-on-debian-8-mirror-across-two-storage-servers-160323072821.html

Original article

GNOME 3.20 Officially Released

prisoninmate writes: After yet another six months of hard work, the highly anticipated GNOME 3.20 desktop environment for GNU/Linux operating systems was officially released on March 23, 2016. Release highlights include support for operating system upgrades via GNOME Software, middle-click paste, kinetic scrolling, drag-and-drop support for Wayland, a keyboard shortcuts and gestures overlay for most of the core apps, XDG-Apps technology for installing multiple versions of an app, and many more goodies.





Original URL: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/u8XoIeJrKB8/gnome-320-officially-released

Original article

Google opens access to its speech recognition API

Google is planning to compete head-on with Nuance and other voice recognition companies by opening up its speech recognition API to third-party developers. To attract developers, the API will be free at launch, with pricing to be introduced at a later date.

We'd been hearing murmurs about this service for weeks now. The company formally announced it today during its NEXT cloud user conference, where it also unveiled a raft of other machine learning developments and updates — most significantly, a new machine learning platform.

The Google Cloud Speech API, which will cover over 80 languages and will work with any application in real-time streaming or batch mode, will offer a full set of APIs for applications to “see, hear and translate,” Google says. It is based on the same neural network technology that powers voice search in the Google app and voice typing in Google's Keyboard. It also has some other interesting capabilities, such as working in noisy environments and in real time.

Google's move will have a large impact on the industry as a whole — and particularly on Nuance, the company long thought of as offering the best voice recognition capabilities in the business, and certainly the biggest company offering such services. A number of Nuance customers, including startups, could leave it in favor of Google's technology, which not only offers an improved experience over current providers, but will also be made available at a lower cost.

To attract developer interest initially, the API will be completely free to use. Over time it will become paid, though likely with low-cost pricing tiers, we understand. Google may choose to raise those prices later, once it becomes the dominant player in the industry.

Google has offered limited access to its voice technology in its products to date. Developers can make JavaScript calls to the Chrome API, for example, which are then routed to the speech recognition API. And Google announced a Voice Interaction API at Google I/O in 2015, which allows Android developers to add voice interactions to their apps. But Google had yet to open up access to the speech recognition API directly.

The introduction of the speech API won't only impact Nuance and other speech recognition providers; it is also being seen as an attack on Apple, whose virtual assistant Siri has voice recognition capabilities that pale in comparison to Google's. Apple has also yet to offer an API for developers to use the Siri tech in their own apps.

There were hints that Google would be putting a greater emphasis on its voice technology and its many use cases. For example, the company announced in February that it would allow Google Docs users to edit and format their documents by voice.

More to come as we hear about it at the event.

Featured Image: Brian A Jackson/Shutterstock


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/bSoiq2MF0W0/

Original article

Red Hat becomes first open-source company to make $2B

Just think: Some people still don’t believe that you can make money from Linux and open-source software. Fools! Red Hat just became the first open-source company to make a cool 2 billion bucks.

Not bad, considering Red Hat became the first billion-dollar Linux company only four years ago. That may not be unicorn territory, but unlike many billion-dollar-valuation companies, Red Hat did it the old-fashioned way: it earned the money instead of playing on the gullibility of venture capitalists.

Red Hat’s total revenue for its fourth quarter was $544 million. That’s up 17 percent in US dollars year-over-year, or 21 percent measured in constant currency. Subscription revenue for the quarter was $480 million, up 18 percent in US dollars year-over-year, or 22 percent measured in constant currency. Subscription revenue in the quarter was 88 percent of total revenue.

Analysts estimated Red Hat would make $534 million.

Jim Whitehurst, Red Hat's president and CEO, credits “Enterprises increasingly adopting hybrid cloud infrastructures and open source technologies” for driving the company's strong results. Whitehurst continued, “Customers are demanding technologies that modernize the development, deployment and life-cycle management of applications across hybrid cloud environments. Many are relying on Red Hat to provide both the infrastructure and the application development platforms to run their enterprise applications consistently and reliably across physical, virtual, private cloud and public cloud environments.”

For the full 2016 fiscal year, Red Hat’s total revenue was $2.05 billion, up 15 percent in US dollars year-over-year, or 21 percent measured in constant currency. Subscription revenue for the full fiscal year was $1.8 billion, up 16 percent in US dollars year-over-year, or 22 percent measured in constant currency. Subscription revenue in the full fiscal year was 88 percent of total revenue.

While Whitehurst credited the cloud, Red Hat’s subscription revenue can largely be ascribed to Red Hat’s flagship product: Red Hat Enterprise Linux. Still, RHEL, which is now available on Microsoft Azure, is becoming a prominent cloud operating system.

Subscription revenue from infrastructure-related offerings for the quarter was $391 million, an increase of 15 percent in US dollars year-over-year and 18 percent measured in constant currency. Subscription revenue from application development-related and other emerging technologies offerings for the quarter was $89 million, an increase of 38 percent in US dollars year-over-year and 43 percent measured in constant currency.

Full fiscal year subscription revenue from infrastructure-related offerings was $1.48 billion, an increase of 12 percent in US dollars year-over-year and 18 percent measured in constant currency. Full fiscal year subscription revenue from application development-related and other emerging technologies offerings was $323 million, an increase of 37 percent in US dollars year-over-year and 46 percent measured in constant currency.

GAAP net income for the quarter was $53 million, or $0.29 per diluted share, compared with $48 million, or $0.26 per diluted share, in the year-ago quarter. After adjustments non-GAAP net income for the quarter was $97 million, or $0.52 per diluted share, as compared to $81 million, or $0.43 per diluted share, in the year-ago quarter.

Looking ahead to its 2017 fiscal year, Red Hat expects revenue between $2.380 billion and $2.420 billion. At this rate, Red Hat should easily become the first $3 billion open-source company.



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/JF3jrQQqceM/

Original article

Tensorflow Implementation of Deep Visual Analogy-Making

README.md

Tensorflow implementation of Deep Visual Analogy-Making. The MATLAB code of the paper can be found here.


This implementation contains a deep network trained end-to-end to perform visual analogy making with

  1. Fully connected encoder & decoder networks
  2. Analogy transformations by vector addition and deep networks (vector multiplication is not implemented)
  3. Regularizer for manifold traversal transformations

This implementation contains:

  1. Analogy transformations of shape dataset
    • with objective for vector-addition-based analogies (L_add)
    • with objective for multiple fully connected layers (L_deep)
    • with manifold traversal transformations

Prerequisites

Usage

First, you need to download the dataset with:

$ ./download.sh

To train a model with the shape dataset:

$ python main.py --dataset shape --is_train True

To test a model with the shape dataset:

$ python main.py --dataset shape 

Results

Results of analogy transformations on the shape dataset with fully connected layers (L_deep) after 1 day of training.

From top to bottom for each: reference, output, query, target, prediction, manifold prediction after 2 steps, and manifold prediction after 3 steps.

(training in progress)

Training details

(training in progress)

Reference

Author

Taehoon Kim / @carpedm20


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/W8UmD9-8cBg/visual-analogy-tensorflow

Original article
