Opera Postpones Earnings Release As Takeover Rumors Swirl

Last Friday, the Oslo stock exchange halted trading in Opera Software’s shares after rumors of a potential takeover by Chinese security firm Qihoo 360 appeared in a Norwegian newspaper. Trading in Opera shares is still suspended today (a pretty unusual move), and now Opera itself is adding fuel to the takeover rumors by postponing its planned earnings call from… Read More


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/3nSCek-gqPA/

Original article

Teaching Technology: Deans’ Roundtable at Tech Show

[NOTE: Please welcome guest blogger Michael J. Robak, Associate Director/Director of Information Technologies, Leon E. Bloch Law Library, University of Missouri – Kansas City.]

This year’s ABA Tech Show is March 16 – 19, 2016 (http://www.techshow.com/). It is also the 30th anniversary of the Tech Show. This year, for the first time, an academic-specific event is going to be tied to the Tech Show. The half-day conference, on the morning of March 16, 2016, is an opportunity for law school faculty and administration, law students and practitioners to discuss the “how and what” of teaching technology as well as develop a framework for adding an academic track to the 2017 program. Law students are particularly encouraged to attend the event and the show. Pricing for law student admission to the 3-day event is $100. (Registration link here: http://www.techshow.com/pricing/)

Below is the program description – if you are planning to attend the ABA Tech Show, this will be a great way to start the event!

Teaching Technology in the Academy: Are We Finally at the Tipping Point?

A Law School Roundtable discussion held in conjunction with the 2016 ABA Tech Show

Hosted by IIT Chicago-Kent College of Law

March 16, 2016

9:00 – 12 noon

No charge for registration

Roundtable Description

2016 marks the 30th anniversary of the ABA Tech Show. In 1986, the idea of “micro-computers” in law practice, to quote Jeff Arresty, one of the show’s founders, “was at its complete inception”.

Much has changed in those 30 years when it comes to legal technology. But law schools have not yet fully embraced the importance of technology competency for law students. Even though law schools have begun to bring technology courses into the curriculum and to experiment with innovative concepts like legal hackathons, much remains to be done.

In July 2014 and again in April 2015, the University of Missouri – Kansas City hosted two conferences on Law Schools, Technology and Access to Justice. These conferences were supported by the Ewing Marion Kauffman Foundation and brought together academics, legal technologists and members of the Access to Justice community. One of the stated goals of the conferences was to produce a specific direction for the teaching of technology in law schools. A set of principles, referred to informally as the Kansas City Principles, was developed and reads as follows:

Fundamental Principle #1:

In their role of ensuring that the lawyers of tomorrow have the core competencies to provide effective and efficient legal services, law schools have the responsibility to provide all students with education and training to enable them to understand the risks and benefits associated with current and developing technologies and the ability to use those technologies appropriately.

Fundamental Principle #2:

In order for lawyers to fulfill their professional obligations to advance the cause of justice, it is essential that economically viable models for the delivery of legal services be developed that allow all members of society to have access to competent legal representation or effective self-representation regardless of income, and law schools should assist in the development of technologically-supported legal marketplaces that help identify available alternatives and, where legal representation appears most appropriate, to empower those seeking the services of a lawyer to identify and retain a competent lawyer of choice at reasonable cost.

Fundamental Principle #3:

As part of their responsibility to assist in providing access to law and justice, law schools should use their legal knowledge and technological capabilities to make the law more comprehensible and readily available to the public so as to empower people to use the law and, where appropriate, lawyers, to improve the quality of their lives, and should include in this endeavor, among other initiatives, working with national, state, and local governments to provide the public with free on-line access to statutes, regulations, cases and other primary law at all levels of government.

Fundamental Principle #4:

In order to encourage community economic development and contribute to a strong global economy, law schools should educate lawyers who can stimulate entrepreneurship and innovation and assist in developing technology that can support economically viable means of providing affordable legal services to small businesses, social ventures and start-up enterprises.

Fundamental Principle #5:

Because technology has the potential to reinvent the processes of law in ways that can help achieve access to justice, law schools should encourage their students, faculty and graduates to research, teach and implement non-traditional, technological approaches to legal innovation that will maximize the ways in which individuals and entities can achieve the benefits of law and legal process.

The explicit goal of this half-day event is not only to continue to drive the discussion that led to these principles, but to develop an agenda for how to proceed, including how to involve the ABA Law Practice Management Section and leverage the opportunity provided by the ABA Tech Show.

In addition, there has never been a better opportunity for practitioners to help influence law schools on the best directions in which to proceed with technology training. It is expected that the roundtable audience will include not only members of the academy but also practitioners, law students and vendor representatives, and the participation of all these segments in the conversation will be beneficial to determining next steps.

Agenda

8:30 – registration

9:00 – 10:15 – Moderated Panel Discussion:

Meeting Technology Competencies for the 21st Century Lawyer: The Role for Today’s Law Schools

Moderator: Dean Ellen Suni – University of Missouri – Kansas City (UMKC) School of Law

Panelists:
Professor Ronald W. Staudt – IIT Chicago-Kent College of Law
Professor Oliver Goodenough – Vermont Law School
Professor William Henderson – Indiana University Maurer School of Law
Dean Andrew Perlman – Suffolk University Law School

10:15 – 10:30 – Break

10:30 – 12 noon – Discussion Forum

The panel will lead a discussion with members of the audience to move toward consensus regarding the next steps for advancing the teaching of technology in law school and examining how the ABA Tech Show can be part of these efforts going forward.

12 noon – Boxed lunch


Original URL: http://feedproxy.google.com/~r/geeklawblog/~3/T7jZwQNriZ4/teaching-technology-deans-roundtable-at.html

Original article

Living with and building for the Amazon Echo

But as an engineer and product person, what really whets my appetite about the Echo is that you can build apps for it. Well, they’re called “Skills.”

Amazon have quietly been building a solid library of third-party Skills. There are now over 200, including new additions from Uber, Spotify and Domino’s. And they’re clearly taking their new ecosystem seriously: on the developer/platform side there’s a new VP and a $100m Alexa Fund. On the consumer side there’s, well, a Super Bowl ad.

So to learn more, I built a Skill.

Here are some observations from my experience:

Getting Started

Building voice interfaces is no easy task, let alone a framework for arbitrary commands for third-party apps — but the Alexa Skills Kit (as the programming interface is called) is an impressive bit of software.

As a developer you specify the ‘intents’ your Skill supports (think of these like controllers in a Rails app or Activities in an Android app), then specify the various phrases people might use to invoke that intent. You also specify any variables that you expect as part of the incantation. These can be standard types (dates, numbers, places), enums you specify, or arbitrary literals (not recommended, but sometimes necessary). Your code then gets passed clean, structured data to act upon.
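
To make that concrete, here is a minimal sketch of what a Skill backend might look like as an AWS Lambda handler; the intent name "TubeStatusIntent" and the "Line" slot are hypothetical names of my own, but the request/response JSON shapes follow the ASK interface.

```python
# Minimal sketch of a Skill backend as an AWS Lambda handler. Alexa does
# the speech recognition and slot parsing, then POSTs structured JSON;
# the handler only deals with clean data. "TubeStatusIntent" and the
# "Line" slot are hypothetical names used for illustration.
def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "TubeStatusIntent":
            # Slot values arrive pre-parsed as plain strings.
            line = intent["slots"].get("Line", {}).get("value", "all lines")
            return speech_response(f"There are no delays reported on {line}.")
    return speech_response("Sorry, I didn't catch that.")

def speech_response(text):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```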

This programming model is flexible enough to make most things possible, but there are a few limitations. It’d be great to have a programmatic way of updating the enum values of custom slot types — either via an API, or by having the Skills Kit read and cache the values from JSON served at a URL. I’d also like to see an expanded list of built-in types: it currently only supports US cities, for example.

These nits aside, it’s clear a lot of work has gone into the ASK, and it’s super-easy to build pretty complex voice-driven interfaces really quickly.

Needs better support for asynchronous tasks

Not everything happens in an instant. Today, when you ask Alexa something, she can only reply with one block of speech. This works great if the Skill you’re interacting with has the answers ready in an instant, but that’s not always the case.

Imagine your Skill calls an API which takes 5 seconds to respond — not all that uncommon for complex operations. There’ll be an awkward 5-second pause after you pose the question before you hear a response. Granted, you know something’s happening as the Echo’s blue lights pulse in the meantime. But it’d be a much better experience if Alexa offered Skills the ability to respond immediately with something like “OK, let me look that up for you”, and then a few seconds later with the actual response.

A great use case is a hypothetical Lyft app. When you order a ride, it might take 10–60 seconds for real drivers in the real world to accept the job. In Lyft’s app, this latency is satisfied with a spinner. But to make this experience work on the Echo, a Skill needs to be able to reply instantly (“OK, let me get you a ride”), then keep you updated (“I’m still trying to connect you with a driver…”), before letting you know: “I got you a ride. Your car will arrive in 4 minutes”. That experience is not possible today and it desperately needs to be to enable a whole class of semi-asynchronous or long-running Skills.
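
None of this exists in the Skills Kit today, so purely to illustrate the pattern being asked for, here is a sketch against an imaginary progressive-response channel; the `say` callback and the fake ride backend are inventions, not real APIs.

```python
# Purely hypothetical: the Alexa Skills Kit offers no way to push
# follow-up speech today. 'say' stands in for such a channel, and
# FakeRide simulates a backend where driver matching takes a while.
import random
import time

class FakeRide:
    def __init__(self):
        self._assigned_at = time.time() + random.uniform(1, 3)
        self.eta_minutes = 4

    def driver_assigned(self):
        return time.time() >= self._assigned_at

def handle_request_ride(say):
    say("OK, let me get you a ride.")   # immediate acknowledgement
    ride = FakeRide()                   # real matching takes 10-60 seconds
    while not ride.driver_assigned():
        say("I'm still trying to connect you with a driver...")
        time.sleep(1)
    say(f"I got you a ride. Your car will arrive in {ride.eta_minutes} minutes.")

handle_request_ride(print)
```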

Notifications Notifications Notifications

Today, Alexa can only respond to commands you utter. There’s no way the Echo can notify you that something happened — you always have to ask. But events and alerts are critical agents in some of the most useful experiences.

Take that Lyft example again — wouldn’t it be useful if Alexa could tell you when your ride was one minute away? What if Alexa could let you know that the pizza you ordered had been dispatched, or for that matter, remind you that your latest Amazon order would be delivered sometime this afternoon? None of that’s possible today.

Now, I can totally understand why this isn’t in for the v1 — tasteful notifications are hard to get right — but the issues are all solvable. Access to notifications needs to be tightly controlled to prevent abuse, but Amazon already has a certification scheme in place for Skills. It’d also make sense to have a low per-Skill quota to prevent over-use. As a user, I’d also want to be able to set do-not-disturb periods to prevent interruptions.

I really hope the folks at Amazon are actively working on notifications right now. They’d dramatically expand the universe of what’s possible.

Access to long-form & streaming audio

Right now, Skills are able to play short (<30 sec) audio clips. This is really designed for audio branding — perhaps a sound trademark. But I can imagine whole classes of experiences that become possible if Skills are able to access live audio streams or play long files.

For example, I’d love to be able to ask Alexa to start streaming the sound from our baby monitor when we put our daughter to sleep. I’d love people to build Skills which access long-form audio content beyond podcasts — for example LBC’s back catalogue of programming stretching back nearly 10 years.

The built-in apps (TuneIn, Spotify, Pandora, Audible etc) are all able to play >30 sec audio files, and connect to live audio streams. It’d be great to see the same abilities made available to third-party Skills too.

Multiroom

Perhaps this is the ultimate first-world problem: I’d like my ambient voice-activated virtual assistant to be in every room of my home. Yes, I know, I’m lucky enough to have a home with enough distance between rooms so as to not be heard properly between them — let alone lucky enough to have an ambient voice-activated virtual assistant. But I’ve begun to expect — no, rely on — Alexa’s presence, so that I’m confused when I walk into the bedroom and can’t verbally add diapers to our shopping list.

First, it’d be great if, in a multiple-Echo home, Alexa were smart enough that only the nearest device responded — like the Echo’s beam-forming mic on steroids. We only have one Echo, so I can’t check, but I suspect that’s not the case today.

It’d be even better if multiple Echos could work together. I’d love to be able to say, from the kitchen: “Alexa, play a lullaby in the Nursery”. Yes, I’m that good a dad.

A more natural invocation model for third-party Skills

While built-in apps like Amazon’s own or Pandora can be invoked with natural phrases like “Alexa, is it going to rain today?”, or “Alexa, play some Gregory Porter”, third-party Skills have a more rigid invocation format:

Alexa, ask Tube Status if there are any delays
Alexa, ask Automatic where my car is
Alexa, ask TV Shows when is American Idol on?
Alexa, ask|tell|open {skill name} to|for|about|if|whether {some command}

This results in some pretty awkward sentences, and the formal structure interrupts the illusion that you’re talking to a truly smart assistant. To really make Skills shine, Alexa needs to be clever enough to figure out what you’re asking, and delegate to the right Skill. The commands above should be as simple as:

Alexa, are there any delays on the Tube?
Alexa, where’s my car?
Alexa, when is American Idol on?

Now, again, I totally get why this is the state today — the formal structure makes it much easier for Alexa’s brain to invoke the right Skill and pass your command to it in a structured way. But we’re shooting for amazing here — and being able to invoke Skills using natural language and arbitrary sentence structure is critical to the illusion Alexa purveys.

Audio Out

The Echo is a really great little speaker — at least as good as the other Bluetooth speakers in its price range, and they just stream Bluetooth audio. But it’s not Hi-Fi. For me, the Echo is missing a line-out jack that I can wire into a proper set of speakers to play back the streaming audio.

Of course, I could still use a laptop/phone/AirPlay to stream Spotify to my Hi-Fi, but it’s a testament to how awesome Alexa’s interaction model is that I want to use the Echo to control everything. Given that the current hardware doesn’t have an audio jack, a quick fix would be to let Alexa control another Spotify client — kind of like Spotify Connect in reverse. Or I could hack it


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/X1qzsVfVRf4/living-with-and-building-for-the-amazon-echo-525caea9f280

Original article

How We Monitor and Run Elasticsearch at Scale

SignalFx is known for monitoring modern infrastructure, consuming metrics from things like AWS or Docker or Cassandra, applying analytics in real time to that data, and enabling alerting that cuts down the noise. Core to how we do that is search. It’s not good enough to just process and store/retrieve data faster than anything out there, if it takes a long time for users to find the data they care about. So to match the speed of the SignalFlow analytics engine that sits at the core of SignalFx, we’ve been using Elasticsearch for our searching needs from day one.

In this post, we’ll go over some lessons learned from monitoring and alerting on Elasticsearch in production, at scale, in a demanding environment with very high performance expectations.

Why Elasticsearch

We’ve found Elasticsearch to be highly scalable, to provide a great API, and to be very easy to work with for all our engineers. Ease of setup also makes it very accessible to developers without operational experience. Furthermore, it’s built on Lucene, which we’ve found to be solid.

Since the launch of SignalFx in March of 2015, our Elasticsearch deployment has grown from 6 shards to 24 (plus replicas) spread over 72 machines holding many hundreds of millions of documents. And it’ll soon double to keep up with our continued growth.

Operating Elasticsearch

At SignalFx, every engineer or team that writes a service also operates that service — running upgrades, doing instrumentation, monitoring and alerting, establishing SLOs, performing maintenance, being on-call, maintaining runbooks, etc. Some of the challenges we face might be unique to our scale and how we use Elasticsearch, but some of them are universal to everyone who uses Elasticsearch.

Instrumentation: Collecting Metrics

Elasticsearch provides a fairly complete set of metrics for indexes, clusters, and nodes. We collect those metrics using collectd and the collectd-elasticsearch plugin. An important aspect of how SignalFx works and why people use it is what we call “dimensions”. These are basically metadata shipped in with metrics that you can aggregate, filter, or group by—for example: get the 90th percentile of latency for an API call response grouped by service and client type dimensions.

There are a few things we’ve added to the original collectd-elasticsearch plugin to take advantage of dimensions, which we’ll be submitting as PRs soon. Now you can track metrics per index and also get cluster-wide metrics. These are enabled by default in the plugin but can be switched on/off in the config.
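
As a rough illustration of the collection side (this is not the collectd plugin itself, and the localhost endpoint is an assumption), per-node counters can be pulled from the Elasticsearch stats API and tagged with dimensions like so:

```python
# Sketch: pull per-node search counters from the Elasticsearch stats API
# and attach dimensions (cluster, node) for filtering/grouping downstream.
# Not the collectd-elasticsearch plugin itself; localhost:9200 is assumed.
import requests

def collect_search_metrics(base_url="http://localhost:9200"):
    stats = requests.get(f"{base_url}/_nodes/stats/indices").json()
    cluster = stats["cluster_name"]
    points = []
    for node in stats["nodes"].values():
        search = node["indices"]["search"]
        points.append({
            "metric": "search.query_total",
            "value": search["query_total"],
            "dimensions": {"cluster": cluster, "node": node["name"]},
        })
    return points

for point in collect_search_metrics():
    print(point)
```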

If you use collectd and the collectd-elasticsearch plugin, SignalFx provides built-in dashboards displaying the metrics that we’ve found most useful when running Elasticsearch in production at the node, cluster, and cross-cluster levels. We’re looking at adding index level dashboards in the future.

The charts cover, among others: CPU load, memory utilization, heap utilization, GC time %, disk IOPs, file descriptors, and segments; rates such as requests / sec, search requests / sec, indexing requests / sec, and merges / sec, plus active merges and thread pool rejections; filter cache and field data cache sizes, deleted docs %, doc growth rate %, and average query latency; cluster-level counts (# clusters, # nodes, nodes / cluster); and “top N” views of indexes and clusters by search requests, indexing requests, query latency, and index growth.

Built-in Elasticsearch Overview Dashboard in SignalFx

Built-in Elasticsearch Cluster Dashboard in SignalFx

Pre-built Elasticsearch Node Dashboard in SignalFx

As the infrastructure changes and nodes come in or out of service, all charts, dashboards, and alerts automatically take into account the changes and don’t have to be modified in any way.

Investigation: Cluster, Node, and Shard

With a large number of nodes, you have to figure out whether problems are cluster-wide or machine-specific. We used to frequently get threadpool-full issues, sometimes caused by large numbers of pending requests and sometimes by a single slow node dragging down the performance of a whole batch of requests.

The process:

  1. First we look at cluster level metrics
  2. Then down to a node view, looking at the top 10 or top 5 for a particular raw metric or analytic (like variance) to isolate where the problem is
  3. Then a check to see whether the problem is on a subset of nodes that host the same shard or whether it’s a node-specific problem

There are basically three scales of problems to contend with – cluster, shard, and node – and typically you have to look at all three. Some examples from the trenches:

  • We’ve had many performance issues that came down to a noisy neighbor, network I/O bugginess, or some other random problem with the AMI or VM underlying a given node on AWS
  • We used to see large spikes in memory consumption that caused problems during garbage collection. Eventually we started looking at individual caches and discovered the culprit was the field data cache, which is set cluster-wide but takes effect per node as a configurable share of the heap. So we’ve moved to using doc values (disk-based field data introduced in Elasticsearch 1.0.0); a mapping sketch follows this list.
  • As part of handling document growth, we sometimes need to re-shard our index. When re-sharding, we need to re-index documents in batches, and we’ve run into thread pool rejections on the indexing queue in the process. After isolating which nodes this was happening on, over a period of time, we saw that it was on three nodes (primary plus two replicas) representing a single shard. Rejections would spike on three machines, then come down and spike on another three machines (the next shard). It turned out that when we queried Elasticsearch to build the batch of documents to index, results were returned in shard order, causing significant load shard by shard. So we changed the query order to be more randomly distributed.
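
Here is a minimal sketch of the doc values change from the second bullet; the index, type, and field names are hypothetical, and the exact mapping syntax varies by Elasticsearch version.

```python
# Sketch: enable doc values on a field so aggregations and sorting use
# on-disk field data instead of the heap-resident field data cache.
# Index ("metadata"), type ("document"), and field names are hypothetical;
# this uses 1.x-era mapping syntax, which varies by version.
import json
import requests

mapping = {
    "properties": {
        "service": {
            "type": "string",
            "index": "not_analyzed",  # doc values require non-analyzed fields
            "doc_values": True,       # keep field data on disk, off the heap
        }
    }
}

resp = requests.put(
    "http://localhost:9200/metadata/_mapping/document",
    data=json.dumps(mapping),
    headers={"Content-Type": "application/json"},
)
print(resp.json())
```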

Alerting: More Signal Less Noise

At our scale, the amount of metrics emitted by Elasticsearch is huge. It’s impossible to look at the raw metrics and alert on them in any useful manner. So we’ve had to figure out derived metrics that are actually useful, with alert conditions that don’t inundate the on-call engineer, applying SignalFx’s powerful SignalFlow analytics engine to do the math in real time so we don’t miss any anomalies.

In one example, we used to have checks on the state of every node, but the way Elasticsearch works — if the cluster becomes yellow or red, then every machine in the cluster gets set yellow or red — meant 72 alerts, one per node. We’ve since switched to taking the cluster status reported by each host, assigning it a numerical value (0 for green, 1 for yellow, 2 for red) and alerting on the max value. Now, using SignalFx, we trigger only a single alert when the cluster status gets set to yellow or red: even when all 72 instances report yellow, alerting on the max limits the noise to one alert.
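
Outside of SignalFx, the same trick reduces to a few lines; this sketch just shows the status-to-number mapping and the max.

```python
# Sketch of the status-to-number trick: map each host's reported cluster
# status to a value and alert on the max, so 72 identical "yellow"
# reports collapse into a single signal.
STATUS_VALUE = {"green": 0, "yellow": 1, "red": 2}

def cluster_status_signal(reports):
    """reports maps host -> status string, e.g. {"es-01": "yellow"}."""
    return max(STATUS_VALUE[status] for status in reports.values())

reports = {f"es-{i:02d}": "yellow" for i in range(72)}
value = cluster_status_signal(reports)
if value >= STATUS_VALUE["yellow"]:
    print("ALERT: cluster status is", ["green", "yellow", "red"][value])
```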

Monitoring Elasticsearch Cluster State with SignalFx

Taking the max of cluster status

Alerting on Elasticsearch Cluster State Yellow with SignalFx

Alerting on cluster status yellow if it persists for 90 minutes or more

Alerting on Elasticsearch Cluster State Red with SignalFx

Alerting on cluster status red if it persists for 30 seconds or more

In another example, we know that Elasticsearch can recover from a failed machine by restarting replicas on another node. We also know, based on shard size and experience timing it, that recovery can take up to an hour and a half. We use this to decide whether it makes sense to wake somebody up — by applying duration thresholds to alert conditions, an alert is triggered if any of these three conditions are true (a sketch of this logic follows the list):

  1. The number of unallocated shards is nonzero for longer than 2 minutes, the time it takes Elasticsearch to assign failed shards to other nodes
  2. The number of relocating shards is nonzero for longer than 90 minutes, the time it takes to fully relocate a shard
  3. The number of Elasticsearch nodes that are down is 2 or more, which carries a higher risk of getting into the red state
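
A minimal sketch of those duration-thresholded conditions follows; the sample windows and helper are illustrative, not SignalFx internals.

```python
# Sketch of the three page-worthy conditions above. Each series is a list
# of per-minute samples, most recent last; 'sustained' fires only if the
# condition held for every sample across the threshold window.
def sustained(samples, predicate, minutes):
    recent = samples[-minutes:]
    return len(recent) >= minutes and all(predicate(v) for v in recent)

def should_page(unallocated, relocating, nodes_down):
    return (
        sustained(unallocated, lambda v: v > 0, minutes=2)     # assignment stuck
        or sustained(relocating, lambda v: v > 0, minutes=90)  # relocation overran
        or nodes_down[-1] >= 2                                 # risk of going red
    )

print(should_page(unallocated=[1, 1], relocating=[0] * 90, nodes_down=[0, 0]))  # True
```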

Putting all our experience over the last few years together, here’s what we’ve found most useful to alert on:

  • Spikes in thread pool rejection
  • Sustained memory utilization above 50%
  • Cluster status at yellow for longer than 90 mins
  • More than one node at a time goes out of service
  • Any master node goes out of service
  • Number of concurrent merges per node being higher than five for a sustained period — when this happens, Elasticsearch will start throttling indexing, which causes index requests to stall or time out.
  • Query latency variance — Elasticsearch exposes metrics for the sum of query latencies per node since the node came up, and for the total number of queries. Dividing the two gives us the average query latency for every node from its start time. But tracking the average over the lifetime of a node smooths out any variance. So, using SignalFlow’s timeshift capability, we take the difference in the average query latency on a per-minute basis, to see whether any spikes are significant enough to move the average within any given minute, per node and also by top cluster. A numeric sketch and a snapshot of the analytics setup for this follow below.
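
Numerically, the min-over-min idea reduces to differencing cumulative counters; this toy sketch assumes one sample per minute.

```python
# Toy sketch of the min-over-min latency computation: Elasticsearch only
# exposes lifetime-cumulative counters, so per-minute average latency is
# recovered by differencing consecutive samples (what timeshift enables).
def per_minute_latency(samples):
    """samples: [(query_time_ms, query_total), ...], cumulative, one per minute."""
    out = []
    for (t0, n0), (t1, n1) in zip(samples, samples[1:]):
        queries = n1 - n0
        out.append((t1 - t0) / queries if queries else 0.0)
    return out

# A lifetime average would smooth this spike away; per-minute diffs expose it:
print(per_minute_latency([(1000, 100), (1300, 150), (2500, 170)]))  # [6.0, 60.0]
```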

Monitoring Elasticsearch Query Latency Min-Over-Min with SignalFx

SignalFx customer Symphony Commerce uses Elasticsearch at a similar scale to us, to power both search services for their application and also to power search against product catalogs for their customers’ customers. You can read about how they use SignalFx here.

“We’ve always relied on SignalFx for understanding problems with Elasticsearch. Before, all we knew was that some nodes had gone down or performance had slowed. With SignalFx, we’re able to monitor and alert on important metrics like document growth and garbage collection time—as well as look back in time to see exactly when problems arose and in which part of our cluster.” –Stephen Bochinski, Full Stack Engineer, Symphony Commerce

Scaling and Capacity: When To Grow

Because of the way sharding works in Elasticsearch, we’ve found that scaling and capacity management have to be thought through clearly and treated as a proactive process. There are basically two ways to scale: add disk capacity to existing nodes or reshard to add more nodes. The first is low-risk and non-disruptive. Resharding is a complex process; doing it while the old index is being written to makes it even more complex. We’ve had to develop some methods of our own to make it work at SignalFx, where we can’t afford to lose updates to metadata or stop serving queries while resharding is in process. In addition, at our scale it is not a fast process, sometimes taking many days. There’s no getting around the physics of moving bits. You can read about how we do resharding in a way that not only guarantees no updates are lost, but also provides ways to pause and roll back the process.

The key metrics we track for capacity are document growth rate and storage usage. We track the percentage of growth in documents, percentage growth in storage consumption, absolute storage consumption, and top indexes by growth. We’ve found that storage consumption has to stay below around 50% in general and below 70% at all times. Going above 50% on a regular basis, or 70% at all, means that large merges can bring everything to a crawl and it’s time to scale.
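
As a sketch, that rule of thumb is a simple threshold check (the 50% and 70% thresholds are the ones quoted above):

```python
# Sketch of the storage rule of thumb: stay below ~50% in general and
# below 70% at all times, since large merges need the headroom.
def storage_status(used_bytes, total_bytes):
    pct = 100.0 * used_bytes / total_bytes
    if pct > 70:
        return "scale now: large merges can bring everything to a crawl"
    if pct > 50:
        return "plan to scale: regularly above the comfortable threshold"
    return "ok"

print(storage_status(used_bytes=600, total_bytes=1000))  # plan to scale
```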

Comparing document growth rates to storage growth rates and absolute storage consumption gives us an idea of when we’re going to have to reshard in the future, so we have enough runway to reshard before suffering performance problems.

Conclusion

We hope everyone who runs Elasticsearch and is trying to figure out what to monitor, what to alert on, and how to scale their infrastructure will find this useful. If you’re interested in building and working on modern infrastructure software, join us — we’re hiring engineers for every part of SignalFx!

Stop by our booth at Elasticon to see how we monitor Elasticsearch (and everything else) in real life and get scanned for a chance to win a BB-8!


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/mFGYIxEIDWE/

Original article

Oracle issues an emergency patch to Java for Windows


Security problems are not new to Java, though it is, admittedly, not the only platform that suffers from them. Now Oracle has acknowledged a new hole, bad enough for it to issue an out-of-cycle emergency patch.

The security flaw, which goes by the catchy name of CVE-2016-0603, requires the user to access a malicious website and accept the download of Java version 6, 7 or 8 in order to become infected. For those who fall for it, however, the attack will allow a total compromise of the system.

“Because the exposure exists only during the installation process, users need not upgrade existing Java installations to address the vulnerability. However, Java users who have downloaded any old version of Java prior to 6u113, 7u97 or 8u73, should discard these old downloads and replace them with 6u113, 7u97 or 8u73 or later”, writes Eric Maurice of Oracle.

This is just the latest in a long line of patches from Oracle, a company that only recently had to issue 248 patches at once. Details of the actual bug are not being revealed, so as to keep them away from potential malicious use. The good news in all of this is that an attack seems unlikely, given the need to be lured to a particular site and then to download a version of Java that isn’t coming from Oracle.

Photo Credit: Balefi


Original URL: http://feeds.betanews.com/~r/bn/~3/P_6PgxWPjcE/

Original article

Sidebar Diagnostics is a stylish system monitor for your desktop

Sidebar Diagnostics is a well-designed professional system monitor for Windows Vista and later.

No, we’re not usually interested in this kind of tool, either, but wait — this is a package you might actually want to use.

It’s an open source project, so there’s no adware or other hassles, and you don’t even have to install it — just unzip and go.

On first launch the sidebar appears after a few seconds, and in a neat professional touch, a dialog prompts you to confirm it’s displayed correctly and helps you fix any problems.

The default sidebar has all the key details you’d expect: CPU type, temperature and usage by core; RAM load, used and free; GPU details; network bandwidth usage per adapter; a very simple view of drive free space (a bar, no figures), and the date and time.

Normally we’d be concerned about the accuracy of these figures, but that’s not a significant issue here: Sidebar Diagnostics uses Open Hardware Monitor code to find all the necessary data.

This worked well for us, but if you’re unhappy with the defaults then there are plenty of tweaks available.

You’re able to customize the width, font, colors, opacity and more.

The sidebar can be displayed on the left or right side of the screen, moved to another monitor, set as “always on top”, or set to reserve space for itself (maximize other applications and they’ll leave the sidebar visible).

There are various hardware options. You’re able to display more or fewer details for individual modules, turn some off altogether, reorder the others, and maybe set temperature alerts for your CPU and GPU.

And although this is easy enough to control from the mouse, or a system tray icon, there are also optional hotkeys that can be customized to suit your needs.

Put it all together and Sidebar Diagnostics is an excellent system monitor, stylish, highly configurable and easy to use. Go grab your copy immediately.

Sidebar Diagnostics is an open-source project for Windows Vista and later.


Original URL: http://feeds.betanews.com/~r/bn/~3/w-SDlY5gJMk/

Original article
