California DOJ’s OpenJustice Platform Makes Local Law Enforcement Data More Transparent

The murders of unarmed African-American people at the hands of police officers have created a lot of reasonable doubt in our criminal justice system. OpenJustice, an interactive web platform led by the California Department of Justice, has today released a new set of criminal justice data for the sake of transparency and accountability.


Original URL: http://feedproxy.google.com/~r/Techcrunch/~3/FyugzN2s3j0/

Original article

The fall and rise and rise of chat networks

At the end of October 2014, something very important came to an end. After 15 years of changing the way people communicated forever, Microsoft closed down its Windows Live Messenger service.

Originally named MSN Messenger, its demise was not an overnight failure. Microsoft’s acquisition of Skype for £5.1 billion in 2011 meant it was only a matter of time before Messenger was finally closed. China was the last territory to migrate the service to Skype; other countries did so 12 months earlier.

At its height, MSN Messenger had more than 330 million users after originally being launched to rival the emerging chat networks of AOL’s AIM service and ICQ, followed by the entry of Yahoo Messenger. It was the social network of its day and as influential and dominant as Facebook is today.

Enlarge / MSN Windows Live Messenger for Passport—the IM client that we all loved to hate.

The closure in October 2014 followed that of AOL’s Instant Messenger, which quietly axed its chat rooms in 2010. Two years later, in 2012, Yahoo Messenger followed suit and closed its public chat service, explaining only that it was no longer a “core Yahoo product.” More pointedly, the ubiquity of the mobile phone, and of messaging built for that more immediate platform, had made them redundant.

The cessation of these networks signalled a nominal end to the first wave of chat networks before the tsunami of chat ingénues Snapchat and WhatsApp swept over them. While early chat networks went from thousands to millions of users, these chat networks have billions of users… and are worth billions of dollars.

WhatsApp was acquired by Facebook (who else?) for $19 billion (£13.5 billion) in 2014, and a $500 million (£350 million) investment in Snapchat in 2015 now values the company at more than $20 billion (£14 billion). More recently, the success of these giants has driven a new, third wave of chat networks, where emerging companies are offering all forms of niche content to attract and retain new users—but more on that later.

First, let’s go back to the start and see how chat technology emerged to become the most important human connector of the age and how the rise of chat networks began.

Talkomatic

Enlarge / This is what Talkomatic, the world’s first multi-user chat room, looked like back in 1973.

Like many digital technologies, multi-user chat started life in an American university and developed in a remarkably similar way to that of the Internet. In this case, the world’s first chat network arrived in 1973 with Talkomatic, which was built on PLATO, a computer-based education system at the University of Illinois.

It was primitive at its inception—Talkomatic had six channels, and only five people could chat at the same time—but what started as something for use in the classroom quickly became something for use outside of school; a place to chat with friends in a safe and personal environment. Sound familiar?

Talkomatic would continue to grow slowly over the next decade, but it was in 1980, when the emergent ISP CompuServe released its commercial CB Simulator to the general public, that chat networks exploded into talkative life.

The CB prefix was important, because it represented citizens band radio, a technology that had earlier reached its apogee when the 1975 novelty song Convoy reached No 1 in the US, based on the worldwide fad for CB radio. This song by C. W. McCall was a three-way conversation between US truckers using CB radio and CB slang to create a narrative where users of the technology were able to undermine society’s mores, rather like the hackers of today. “We’re about to go hunting bear” (bear meant “police”) was as popular a catchphrase in 1975 as any around in 2015.

Enlarge / CompuServe’s CB Simulator, which launched way back in 1980, was a rather simple affair. This screenshot is probably running on either a Commodore or an MS-DOS PC.

Many chat pioneers liked to think of themselves as subversive, and the CompuServe CB simulator appealed to their outsider status. Like CB radio it had 40 “channels,” and its similar CB nomenclature such as “squelch” and “monitor” only underscored this connection.

CompuServe CB was hugely successful, and other companies built on the shoulders of its gigantism when AOL acquired CompuServe in 1998 and used an updated chat network as one of its features to encourage Americans to buy dial-up subscriptions.

The company had launched Instant Messenger a year earlier and within 12 months had 19,000 chatrooms. What was once an almost insurrectionary network had gone mainstream. Over the next 15 years it would become a locked-in technology with further evolutions via the respective networks of Friendster and MySpace.

You call that a chat network?

So, that was the past. What’s next? An October 2015 report, Connected Life, from market research consultancy TNS, polled 60,000 Internet users in 50 markets and revealed the sharp rise in instant messaging (IM) usage. More than half the planet’s population (55 percent) is using chat networks every day on platforms such as WhatsApp, Snapchat, Viber, and Line.

This trend is being led strongly by Asia, where daily usage jumps to 69 percent in China and 73 percent in Hong Kong. Chat networks are particularly dominant in emerging “mobile-first” markets, with daily usage rising in Brazil (73 percent), Malaysia (77 percent), and South Africa (64 percent). By contrast, some Western markets are lagging behind, including the UK (39 percent) and the US (35 percent).

“Apps such as Snapchat, WeChat, Line, and WhatsApp are sweeping up new users every day, particularly younger consumers who want to share experiences with a smaller, specific group, rather than using public mainstream platforms such as Facebook or Twitter,” said Joseph Webb, global director of the Connected Life report. “As people’s online and mobile habits become ever more fragmented, companies need to tap into the growing popularity of IM and other emerging platforms. The need for a content-driven approach across IM, social and traditional channels has never been clearer.”

Enlarge / In China, you can take out a microloan from within the QQ messaging app, and WeChat is coming soon.

With more than 606 million unique global users, one of the fastest rising networks is Viber, which gives its users the ability to connect in the way that works best for them for free, whether that is through individual or group text messaging, video and voice calls, or stickers.

Viber Games features mobile games for users to play socially against one another, using the Viber platform to send game invites, see what their friends are playing, and brag about their scores. It also features Public Chats, which allows its users to follow brands, celebrities, media, and entertainment content.

“Traditional social networks centre on the idea of users ‘broadcasting themselves’ to anyone who will listen. The recent growth in messaging app use reflects mobile users’ interest in real conversations with closed networks of friends and family,” Viber CMO Mark Hardy told Ars. “With mobile now the primary screen, chat apps will continue to develop as leaders in the app space, expanding their services and functionality to act as a hub platform aggregating multiple mobile experiences. Now is the time for the chat apps to fully engage the huge audiences they have built.”

Gamification… of chat?

Viber recently announced a $9 million (£6.3 million) acquisition of social gaming startup Nextpeer, the creators of a system that allows games developers to easily incorporate social gaming features into their apps. Such a social gaming feature is a huge incentive for chat network users to remain loyal to their network of choice and also one that drives user retention and acquisition.

Many of these new chat networks like to offer a games element, but one company, London-based Palringo, is offering a more innovative approach when it comes to games.

Palringo’s app has been downloaded more than 40 million times and not only offers in-chat games across more than 350,000 chat groups, it also publishes games on the main app stores to leverage its 40 million installed user base.

Some of these 350,000 chat groups have more than 2,000 members, and over the past 18 months the company has acquired mobile and social games developer companies in Finland and Sweden to bolster its games offering.

Enlarge / We don’t talk about it much on Ars, but simple mobile games like Balloony Land are where a growing percentage of gaming profits are being made.

Its latest published game, Balloony Land, has been hugely popular on both the iOS and Android app stores, and, when prompted, players can click on the Palringo button within the game to further immerse themselves in the Palringo community.

Upcoming games from the company include Eternal Enemies, a clan-based game where users can play as Ninjas or Pirates and wage war against each other. Palringo then acts as a social and strategic hub where clan members share intelligence, pick targets, and unlock additional content in the game.

Palringo’s model appears to be working very nicely. Recently nominated for two categories in the annual Meffys awards, the company also came 7th in the Sunday Times Fast Track 100, a league table showing the 100 fastest-growing companies in the UK. This position was based on annual revenues of $14 million (£9.7 million), more than double those of 2013. On these figures, 85 percent of revenue was from games with a very steady profit margin of 50 percent.

“We’ve been in existence in different iterations since 2007, and it became increasingly clear two years ago that we should build a messaging-based business. Moreover, our data showed that our users wanted more than communication, they wanted entertainment,” Palringo CEO Tim Rea told Ars.

“A lot of our customers were also on our network because it was fun to communicate with people they didn’t already know, but could come to know. That is, people who were into the same type of things, and not their existing base of people they did know.”


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/CMM8SGlfcmI/

Original article

Why I stopped using StartSSL (Hint: it involves a Chinese company)

TL;DR: The PKI platform of StartSSL, an Israeli leader in free SSL certificates, is now hosted by Qihoo 360, a Chinese antivirus company, which uses IPs from a Chinese state-owned telecommunication company.

StartSSL is a PKI solution from StartCom, a company based in Israel.

From https://en.wikipedia.org/wiki/StartCom:

StartSSL offers the free (for personal use) Class 1 X.509 SSL certificate "StartSSL Free", which works for webservers (SSL/TLS) as well as for E-mail encryption (S/MIME). It also offers Class 2 and 3 certificates as well as Extended Validation Certificates. All major browsers include support for StartSSL certificates.

StartSSL announced in December 2015 that it will expand activities in China:

From https://www.startssl.com/NewsDetails:

StartCom, a leading global Certificate Authority (CA) and provider of trusted identity and authentication services, launched its newly designed website just at the end of the year and announces expansion of its activities in China.

Mapping of StartSSL public infrastructure

StartSSL uses https://auth.startssl.com/ as the front-end to access their PKI (log in to the PKI, create and revoke certificates…). It’s the core of their service and the critical part of their infrastructure.

Using Robtex, we discover that the StartSSL platform is mainly operated in Israel within the 192.116.242.0/24 IP range (netname: SrartCom-Ltd (sic!), country: IL).

From https://www.robtex.com/route/192.116.242.0-24.html:

The www.startssl.com vhost is provided by a custom CDN:

root@kali:~/# host www.startssl.com
www.startssl.com has address 97.74.232.97    <- Godaddy
www.startssl.com has address 52.7.55.170     <- Amazon Web Services
www.startssl.com has address 52.21.57.183    <- Amazon Web Services
www.startssl.com has address 52.0.114.134    <- Amazon Web Services
www.startssl.com has address 50.62.56.98     <- Godaddy
www.startssl.com has address 104.192.110.222 <- QiHU 360 Inc.
www.startssl.com has address 50.62.133.237   <- Godaddy
root@kali:~/#

Apart from IPs from CDNs, we find a strange fact:

The DNS of auth.startssl.com changed in December 2015 from 192.116.242.27 (StrartCom-Ltd) to 104.192.110.222 (QiHU 360), which belongs to a Chinese Company (Qihoo 360).

There are only 3 vhosts pointing to 104.192.110.222:

www.startssl.com resolves to 104.192.110.222 for one of its IPs
auth.startssl.com -> 104.192.110.222
www.startpki.com -> 104.192.110.222

We can use WhatsMyDNS to check that auth.startssl.com resolves to 104.192.110.222 from any location. This is not a CDN solution but an intentional usage of a single Chinese IP.

https://www.whatsmydns.net/#A/auth.startssl.com
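For anyone who wants to reproduce the check locally, here is a minimal sketch using Node.js and its built-in dns module; the flagged address is simply the Qihoo 360 IP observed above, and nothing else is assumed:

    // Minimal sketch: resolve auth.startssl.com and flag the QiHU 360 Inc. address
    // discussed above. Requires only Node.js and its built-in 'dns' module.
    const dns = require('dns');

    dns.resolve4('auth.startssl.com', (err, addresses) => {
      if (err) {
        console.error('DNS resolution failed:', err.code);
        return;
      }
      addresses.forEach(ip => {
        const flag = ip === '104.192.110.222' ? ' <- QiHU 360 Inc.' : '';
        console.log('auth.startssl.com has address ' + ip + flag);
      });
    });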

Whois information for 104.192.110.222:

From https://whois.arin.net/rest/net/NET-104-192-110-0-1/pft?s=104.192.110.222:

As auth.startssl.com resolves to 104.192.110.222 from any location, we can assume the PKI is now hosted on the 104.192.110.222 IP.

104.192.110.222 is an IP from “QiHU 360 Inc”, which actually means Qihoo 360. Qihoo 360 is a Chinese tech company.

You may have heard something about Qihoo 360, which just bought Opera.
Strangely enough, Qihoo 360 uses IPs from China Telecom Americas. China Telecom Americas is a subsidiary of China Telecom Corporation Limited, a Chinese state-owned telecommunication company. It is the largest fixed-line service and the third largest mobile telecommunication provider in the People’s Republic of China.

It is worrying that the PKI front-end (auth.startssl.com) has been hosted for two months by a Chinese antivirus company, which itself uses a Chinese ISP, and that there hasn’t been any news about it. It can only be linked to the expansion of StartSSL’s activities in China in December 2015, as explained above.

From a historical point of view, StartSSL already refused to revoke certificates affected by the Heartbleed vulnerability and accused users of negligence (“your software was vulnerable”).

With all these facts, I don’t think using StartSSL is a good idea now, unless they offer a clear explanation of why they are hosting their PKI at a Chinese company.

Go use Let’s Encrypt! 🙂

published on 2016-02-16 00:00:00 by Pierre Kim


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/vYwydne6d3g/2016-02-16-why-i-stopped-using-startssl-because-of-qihoo-360.html

Original article

Moore’s law really is dead this time

Gordon Moore’s original graph, showing projected transistor counts, long before the term “Moore’s law” was coined.

Intel

Moore’s law has died at the age of 51 after an extended illness.

In 1965, Intel co-founder Gordon Moore observed that the number of components in integrated circuits was doubling every 12 months or so. More precisely, as this site wrote about extensively in 2003, he observed that the number of transistors per chip that resulted in the lowest price per transistor was doubling every 12 months. In 1965, this meant that 50 transistors per chip offered the lowest per-transistor cost; Moore predicted that by 1970, this would rise to 1,000 components per chip, and that the price per transistor would drop by 90 percent.

With a little more data and some simplification, this observation became “Moore’s law”: the number of transistors per chip would double every 12 months.

Gordon Moore’s observation was not driven by any particular scientific or engineering necessity. It was a reflection on just how things happened to turn out. The silicon chip industry took note and started using it not merely as a descriptive, predictive observation, but as a prescriptive, positive law: a target that the entire industry should hit.

Hitting this target didn’t happen by accident. Building a silicon chip is a complex process, and it uses machinery, software, and raw materials that are sourced from a number of different companies. To ensure that all the different players are aligned and working on compatible timetables to preserve Moore’s law, the industry has published roadmaps laying out the expected technologies and transitions that will be needed to preserve Moore’s law. The Semiconductor Industry Association, a predominantly North American group that includes Intel, AMD, TSMC, GlobalFoundries, and IBM, started publishing roadmaps in 1992, and in 1998 the SIA joined up with similar organizations around the world to produce the International Technology Roadmap for Semiconductors. The most recent roadmap was published in 2013.

Problems with the original formulation of Moore’s law became apparent at an early date. In 1975, with more empirical data available, Gordon Moore himself updated the law to have a doubling time of 24 months rather than the initial 12. Still, for three decades, simple geometric scaling—just making everything on a chip smaller—enabled steady shrinks and conformed with Moore’s prediction.
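As a quick, purely illustrative bit of arithmetic on what those two doubling periods imply (nothing here comes from Moore’s paper; it is just the exponential rule written out from the 1965 baseline of 50 components):

    // Sketch: project component counts from a 1965 baseline of 50 per chip,
    // comparing 12-month and 24-month doubling periods. Illustrative only.
    function projectedCount(baseline, years, doublingPeriodYears) {
      return baseline * Math.pow(2, years / doublingPeriodYears);
    }

    console.log(projectedCount(50, 5, 1));  // 1600: five years of 12-month doubling
    console.log(projectedCount(50, 10, 2)); // 1600: ten years of 24-month doubling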

In the 2000s, it was clear that this geometric scaling was at an end, but various technical measures were devised to keep pace of the Moore’s law curves. At 90nm, strained silicon was introduced; at 45nm, new materials to increase the capacitance of each transistor layered on the silicon were introduced. At 22nm, tri-gate transistors maintained the scaling.

But even these new techniques were up against a wall. The photolithography process used to transfer the chip patterns to the silicon wafer has been under considerable pressure: currently, light with a 193 nanometre wavelength is used to create chips with features just 14 nanometres across. The oversized light wavelength is not insurmountable but adds extra complexity and cost to the manufacturing process. It has long been hoped that extreme UV, with a 13.5nm wavelength, will ease this constraint, but production-ready EUV technology has proven difficult to engineer.

Even with EUV, it’s unclear just how much further scaling is even possible; at 2nm, transistors would be just 10 atoms wide, and it’s unlikely that they’d operate reliably at such a small scale. Even if these problems were resolved, the specter of power usage and dissipation looms large: as the transistors are packed ever tighter, dissipating the energy that they use becomes ever harder.

The new techniques, such as strained silicon and tri-gate transistors, took more than a decade to put in production. EUV has been talked about for longer still. There’s also a significant cost factor. There’s a kind of undesired counterpart to Moore’s law, Rock’s law, which observes that the cost of a chip fabrication plant doubles every 4 years. Technology may provide ways to further increase the number of transistors packed into a chip, but the manufacturing facilities to build these chips may be prohibitively expensive—a situation compounded by the growing use of smaller, cheaper processors.

We’ve recently seen these factors cause real problems for chip companies. Intel originally planned to switch to 10nm in 2016 with the Cannonlake processor, a shrunk version of the 14nm Skylakes shipping today. In July last year, the company changed this plan. An extra processor generation, Kaby Lake, will be released in 2016, still using the 14nm process. Cannonlake and 10nm are still planned but are not due until the second half of 2017.

Compounding all this is that all these extra transistors have become increasingly hard to use. In the 1980s and 1990s the value of the extra transistors was obvious: the Pentium was much faster than the 486, the Pentium II much faster than the Pentium, and so on and so forth. Existing workloads gained substantial speed-ups just from processor upgrades, thanks to a combination of better processors (going from simple in-order processors to complex superscalar out-of-order processors) and higher clockspeeds. Those easy improvements stopped coming in the 2000s. Constrained by heat, clock speeds have largely stood still, and the performance of each individual processor core has increased only incrementally. What we see instead are multiple processor cores within a single chip. This increases the overall theoretical performance of a processor, but it can be difficult to actually exploit this improvement in software.

These difficulties mean that the Moore’s law-driven roadmap is now at an end. ITRS decided in 2014 that its next roadmap would no longer be beholden to Moore’s “law,” and Nature writes that the next ITRS roadmap, published next month, will instead take a different approach.

Rather than focus on the technology used in the chips, the new roadmap will take an approach it describes as “More than Moore.” The growth of smartphones and the Internet of Things, for example, means that a diverse array of sensors and low-power processors are now of great importance to chip companies. The highly integrated chips used in these devices mean that it’s desirable to build processors that aren’t just logic and cache, but which also include RAM, power regulation, analog components for GPS, cellular, and Wi-Fi radios, or even microelectromechanical components such as gyroscopes and accelerometers.

These different kinds of component traditionally use different manufacturing processes to handle their different needs, and the new roadmap will outline plans for bringing them together. Integrating the different manufacturing processes and handling the different materials will need new processes and supporting technology. For manufacturers building chips for these new markets, addressing this kind of problem is arguably more relevant than slavishly doubling the number of logic transistors.

There will also be a focus on new technology beyond the silicon CMOS process currently used. Intel has already announced that it will be dropping silicon at 7nm. Indium antimonide (InSb) and indium gallium arsenide (InGaAs) have both shown promise, and both offer much higher switching speeds at much lower power than silicon. Carbon, both in its nanotube and graphene forms, continues to be investigated and may prove better still.

While a lesser priority, scaling is not off the roadmap entirely. Beyond tri-gate transistors, perhaps around 2020, are “gate all around” transistors and nanowires. The mid-2020s could bring monolithic 3D chips, where a single piece of silicon has multiple layers of components that are built up on a single die.

As for the future, massive scaling isn’t off the cards completely. The use of alternative materials, different quantum effects, or even more exotic techniques such as superconducting may provide a way to bring back the easy scaling that was enjoyed for decades, or even the more complex scaling of the last fifteen years. A big enough boost could even reinvigorate the demand for processors that are just plain faster, rather than smaller or lower power.

But for now, lawbreaking is going to be the new normal. Moore’s law’s time as a guide of what will come next, and as a rule to be followed, is at an end.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/gpLhXmejpUo/

Original article

Congress returns to scrutiny of wealthy university endowments

Wealthy universities’ endowments are once again in the crosshairs on Capitol Hill.

Amid double-digit investment returns and growing public anxiety about student debt and the price of college, members of Congress are reviving their scrutiny of the nation’s richest colleges, an issue that largely was put on hold after the financial crisis in 2008.

Earlier this week two congressional committees sent letters to several dozen colleges and universities, seeking a wide range of information about how they manage their endowments and spend endowment funds.

Senator Orrin Hatch, chairman of the Senate Finance Committee, Representative Kevin Brady, chairman of the House Ways and Means Committee, and Representative Peter Roskam, chairman of the House Ways and Means Subcommittee on Oversight, wrote to 56 private colleges with endowments larger than $1 billion.

“Despite these large and growing endowments, many colleges and universities have raised tuition far in excess of inflation,” wrote the three lawmakers, all Republicans. The committees, they said, are looking into “how colleges and universities are using endowment assets to fulfill their charitable and educational purposes.” They instructed colleges to respond by April 1.

The letters follow a congressional hearing last fall in which House Republicans criticized large university endowments and executive compensation in higher education.

Representative Tom Reed of New York, a Republican, plans to introduce legislation that would require colleges with endowments of more than $1 billion to pay out 25 percent of their annual earnings to reduce the cost of attendance for “working families,” those earning between 100 and 600 percent of the poverty line, according to a fact sheet provided by his office.

If colleges were to cover the entire cost for those families with endowment earnings left over, they would then have to direct the money to reducing the cost of attendance for low-income families, presumably those earning less than 100 percent of the poverty line. Colleges that do not comply would face penalties, including the possible loss of tax-exempt status.

“We care about ensuring fairness in higher education and allowing every child to succeed without holding them back because of cost,” Reed said in a statement. “It’s only right that we begin looking for solutions to get the cost of higher education under control, and this is a step in the right direction in that process.”

With the exception of this past year, in which the average endowment grew by just 2.4 percent, college and university endowments have seen double-digit returns in recent years.

Catharine Bond Hill, president of Vassar College and an economist who studies higher education, says the latest round of attention to university endowments may also partly reflect growing concern about income inequality.

“The richer schools are getting richer and the poorer schools, in some cases, are getting poorer,” she said.

Endowment wealth is heavily concentrated at the richest institutions. Nearly 11 percent of colleges hold almost three-quarters of all endowment wealth among the 832 institutions that participate in the annual endowment study by the National Association of College and University Business Officers, according to a recent analysis by the Congressional Research Service.

Since the financial crisis, the nation’s 40 richest universities have also seen increases in their endowment assets that are more than double those of universities with fewer resources, according to Moody’s.

As congressional lawmakers channel the public frustration over those increases in wealth and rising sticker prices on tuition at those institutions, they’re returning to proposals that would force — or use tax benefits to prod — universities to spend more of their endowment funds.

The Congressional Research Service analysis also floats the previously discussed ideas of taxing endowments or endowment earnings or curbing the tax benefits associated with making donations to wealthy endowments. Donations to rich colleges, in particular, have received much attention recently in the wake of a series of large gifts, such as a $400 million contribution to Harvard’s engineering college. That gift sparked especially strong backlash, with some columns urging people to stop giving to such wealthy institutions.

But Hill, like many other higher education experts, argues that such proposals miss the mark. Because many wealthy institutions have already reduced the net price for low- and middle-income families, she said, further required spending on financial aid may end up benefiting wealthy families the most.

“It’s not clear that’s going to improve issues of access at these schools,” she said. “I would much rather see these schools increase their enrollment of low-income students.”

Ronald Ehrenberg, director of the Cornell University Higher Education Research Institute, said many of the proposals that lawmakers are considering don’t reflect a full understanding of how universities are financed.

On the one hand, Ehrenberg said, the focus of the congressional inquiry on only private colleges does reflect a more nuanced understanding. When lawmakers last probed the issue in 2007, they asked 136 institutions — public and private — for information about their endowment spending.

“I was very happy that they excluded public institutions, because I think they finally realized that the major factor at most public institutions is the failure of state support to keep up with costs and the growth of students,” he said.

At the same time, though, he said some of the congressional attention reflects “this myth that if they were to require higher payout rates that this would lead to massive infusions of financial aid, and at most places that would not be the case.”

“Higher required spending rates lead to slower growth rates for endowments in the long run,” he said. “That’s what the universities are concerned about.”

As lawmakers return to considering changes to how the federal tax code treats university endowments, they have a sizable amount of work on which to draw.

In 2014, then Representative Dave Camp, a Republican who chaired the Ways and Means Committee, included a 1 percent tax on the investment earnings of college and university endowments as part of his tax code overhaul. It would have applied to colleges with endowments larger than $100,000 per student.

And previously, Senator Chuck Grassley, a Republican, prominently criticized wealthy university endowments in 2007 and 2008, held hearings and sent letters to colleges and universities. Grassley floated the idea of requiring universities to spend a certain amount of their endowments each year. Although his plans were dropped, the scrutiny is widely credited with spurring more generous, no-loan financial aid packages for low- and middle-income students at the wealthiest institutions.

Dean Zerbe was senior counsel to the Senate Finance Committee when Grassley (and the panel’s top Democrat at the time, Senator Max Baucus) scrutinized university endowment spending.

That effort “really gave the tax policy community on the Hill an understanding of these issues, and I think there’s a real appetite to take a hard look at this and take it to the next level,” he said.

“It’s very hard to justify the current policy,” said Zerbe, who is now a partner at Zerbe Fingeret Frank & Jadav in Texas. “We’re sending billions of dollars in tax benefits to the wealthy colleges” without incentives to hold down tuition.

“It’s an area where you could find an enormous amount of common ground,” he said. “There’s a lot of rhetoric about the 1 percent in the election cycle. Well it is the 1 percent of colleges getting these benefits; it’s not any community college getting a tax break for endowments.”

But it’s not yet clear if Democrats, who have been pushing student loan debt as an election-year issue, will join in on efforts to go after university endowment spending. Some House Democrats said at a hearing last year that they thought the focus on endowments was a side issue from more pressing concerns on student loan debt.

Senator Patty Murray, the top Democrat on the Senate education committee, said at a press conference last month that Reed’s legislation, which would apply only to colleges with endowments above $1 billion, “doesn’t affect very many students,” adding that “we’re trying to have a broad agenda that really impacts a lot of families.”


Original URL: https://www.insidehighered.com/news/2016/02/16/congress-returns-scrutiny-wealthy-university-endowments

Original article

IBM unveils new mainframe for secure hybrid clouds


More and more organizations are seeing the benefits of adopting the hybrid cloud, but they don’t want to risk sacrificing the security advantages of more traditional systems.

To help businesses tap into hybrid cloud without sacrificing security, IBM is announcing a new mainframe, the z13s. Building on the mainframe’s world-class performance and security profile, the z13s features new embedded security technologies, enhanced data encryption and tighter integrations with IBM Security solutions.

The z13s provides the foundation for a more secure, end-to-end hybrid cloud environment, allowing organizations to protect their most sensitive data without sacrificing performance. Features include a new cryptographic co-processor and hardware-accelerated cryptographic coprocessor cards that provide encryption up to two times faster, so users can increase workload security without compromising throughput and response time.

The new z13s has up to 4TB of memory (eight times more than previous single-frame mainframes) along with faster processing speeds and sophisticated analytic capabilities. IBM Multi-factor Authentication (MFA) for z/OS has been integrated into the operating system, adding to overall security by requiring users to enter a second form of identification, such as a PIN or randomly generated token, to gain access to the system.

It also has IBM’s Security Identity Governance and Intelligence software which helps prevent data loss by governing and auditing device access. Integrated into the mainframe, QRadar and Identity Governance use real-time alerts to focus on identified critical security threats, while Security Guardium uses analytics to help ensure data integrity by providing intelligent data monitoring.

“Fast and secure transaction processing is core to the IBM mainframe, helping clients grow their digital business in a hybrid cloud environment,” says Tom Rosamilia, senior vice president, IBM Systems. “With the new IBM z13s, clients no longer have to choose between security and performance. This speed of secure transactions, coupled with new analytics technology helping to detect malicious activity and integrated IBM Security offerings, will help mid-sized clients grow their organization with peace of mind”.

New z13s systems will be available from next month and you can find out more about IBM’s z Systems portfolio on the company’s website.



Original URL: http://feeds.betanews.com/~r/bn/~3/lvWOwLCOCQc/

Original article

Why I No Longer Use MVC Frameworks

The worst part of my job these days is designing APIs for front-end developers. The conversation inevitably goes like this:

Dev – So, this screen has data element x,y,z… could you please create an API with the response format {x: , y:, z: }

Me – Ok

I don’t even argue anymore. Projects end up with a gazillion APIs tied to screens that change often and, by “design”, require changes in the API; before you know it, you end up with lots of APIs, and for each API many form factors and platform variants. Sam Newman has even started the process of institutionalizing that approach with the BFF pattern, which suggests that it’s ok to develop specific APIs per type of device, platform, and of course versions of your app. Daniel Jacobson explains that Netflix has been cornered into using a new qualifier for its “Experience APIs”: ephemeral. Sigh…

A couple of months ago, I started a journey to understand why we ended up here and what could be done about it, a journey that led me to question the strongest dogma in application architecture, MVC, and where I touched the sheer power of reactive and functional programming, a journey focused on simplicity and scraping the bloat that our industry is so good at producing. I believe you might be interested in my findings.

The pattern behind every screen we use is MVC: Model-View-Controller. MVC was invented when there was no Web and software architectures were, at best, thick clients talking directly to a single database on primitive networks. And yet, decades later, MVC is still used, unabated, for building OmniChannel applications.

With the imminent release of Angular2, it might be a good time to re-evaluate the use of the MVC pattern and therefore the value MVC Frameworks bring to Application Architecture.

I first came across MVC in 1990 after NeXT released Interface Builder (It’s amazing to think that this piece of software is still relevant today). At the time, Interface Builder and MVC felt like a major step forward. In the late 90s the MVC pattern was adapted to work over HTTP (remember Struts?) and today MVC is, for all intents and purposes, the keystone of any application architecture.

Even React.js had to use a euphemism when they introduced a framework that, for once, seemed to depart significantly from the MVC dogma: “React is just the View in MVC”.

When I started to use React last year, I felt that there was something very different about it: you change a piece of data somewhere and, in an instant, without an explicit interaction between the view and the model, the entire UI changes (not just the values in fields and tables). That being said, I was just as quickly disappointed by React’s programming model, and apparently I was not alone. I share Andre Medeiros’ opinion:

React turned out to disappoint me in multiple ways, mainly through a poorly designed API which induces the programmer […] to mix multiple concerns in one component. 

As a server-side API designer, I came to the conclusion that there was no particular good way to weave API calls into a React front-end, precisely because React focuses just on the view and has no controller in its programming model whatsoever.

Facebook, so far, has resisted fixing that gap at the framework level. The React team first introduced the Flux pattern, which was equally disappointing, and these days Dan Abramov promotes another pattern, Redux, which goes somewhat in the right direction but does not offer the proper factoring to connect APIs to the front-end, as I will show below.

You would think that between GWT, Android SDK and Angular, Google engineers would have a strong view (pun intended) as to what could be the best Front-End Architecture but when you read some of the design considerations of Angular2, you don’t necessarily get this warm feeling that, even at Google, people know what they are doing:

Angular 1 wasn’t built around the concept of components. Instead, we’d attach controllers to various [elements] of the page with our custom logic. Scopes would be attached or flow through, based on how our custom directives encapsulated themselves (isolate scope, anyone?).

Does a component-based Angular2 look a lot simpler? Not quite. The core package of Angular 2 alone has 180 semantics, and the entire framework comes close to a cool 500 semantics, and that’s on top of HTML5 and CSS3. Who has time to learn and master that kind of framework to build a Web app? What happens when Angular3 comes around?

After using React and seeing what was coming in Angular2, I felt depressed: these frameworks systematically force me to use the BFF “Screen Scraping” pattern where every server-side API matches the dataset of a screen, in and out.

That’s when I had my “to hell with it” moment. I’ll just build a Web app without React, without Angular, no MVC framework whatsoever, to see if I could find a better articulation between the View and the underlying APIs.

What I really liked about React was the relationship between the model and the view. The fact that React is not template based and that the view itself has no way to request data felt like a reasonable path to explore (you can only pass data to the view).

When you look long enough, you realize that the sole purpose of React is to decompose the view into a series of (pure) functions, and that the JSX syntax is nothing different than:

              V = f( M )

For instance, the Website of one of the projects I am working on right now, Gliiph, is built with such a function:


fig 1. The function responsible for generating the HTML of the site’s Slider component

That function is fed from the model:


fig 2. The model behind the sliders
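Figures 1 and 2 are images, but the general shape of such a component function is easy to sketch; the following is an illustrative stand-in, not the actual Gliiph code:

    // Illustrative sketch only -- not the Gliiph slider shown in fig 1.
    // A component is just a function that turns a slice of the model into HTML.
    function slider(model) {
      const slides = model.slides
        .map(s => '<li><img src="' + s.image + '" alt="' + s.title + '"/></li>')
        .join('');
      return '<ul class="slider">' + slides + '</ul>';
    }

    // V = f(M): rendering is nothing more than calling the function with the model.
    const sliderHtml = slider({ slides: [{ image: 'hero.jpg', title: 'Home' }] });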

When you realize that a plain old JavaScript function can do the job just fine, the next question is: why use React at all?

The virtual-dom? If you feel like you need one (and I am not sure many people do), there are options and I expect more will be developed.

GraphQL? Not really. Don’t be fooled by the argument that if Facebook uses it profusely, it must be good for you. GraphQL is nothing more than a declarative way to create a view-model. Being forced to shape the model to match the view is the problem, not the solution. How could the React (as in reactive) team possibly think it’s ok to request data with “Client-specified queries”:

GraphQL is unapologetically driven by the requirements of views and the front-end engineers that write them. […] A GraphQL query, on the other hand, returns exactly what a client asks for and no more.

What the GraphQL team seems to have missed is that behind the JSX syntax, the subtle change is that functions isolate the model from the view. Unlike templates or “queries written by front-end engineers”, functions do not require the model to fit the view.

When the view is created from a function (as opposed to a template or a query) you can transform the model as needed to best represent the view without adding artificial constraints on the shape of the model.

For instance, if the view displays a value v and a graphical indicator as to whether this value is great, good or bad, there is no reason to have the indicator’s value in your model: the function should simply compute the value of the indicator from the value v provided by the model.

Now, it’s not a great idea to directly embed these computations in the view, but it is not difficult to make the view-model a pure function as well, and hence there is no particularly good reason to use GraphQL when you need an explicit view-model:

                V = f( vm(M) )
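A minimal sketch of that indicator example, with both the view-model and the view as pure functions (the names and thresholds are mine, purely for illustration):

    // The model only carries the value v; the view-model derives the indicator.
    function vm(model) {
      const level = model.v >= 90 ? 'great' : model.v >= 50 ? 'good' : 'bad';
      return { v: model.v, level: level };
    }

    function view(viewModel) {
      return '<span class="' + viewModel.level + '">' + viewModel.v + '</span>';
    }

    // V = f( vm(M) )
    const indicatorHtml = view(vm({ v: 72 }));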

As a veteran MDE practitioner, I can assure you that you are infinitely better off writing code than metadata, be it as a template or a complex query language like GraphQL.

This functional approach has several key benefits. First, just like React, it allows you to decompose your views into components. The natural interface they create allows you to “theme” your Web app or Website or render the view in different technologies (native for instance). The function implementations have the potential to enhance the way we implement responsive design as well.

I would not be surprised, for instance, if in the next few months, people start delivering HTML5 themes as component-based JavaScript functions. These days, that’s how I do all my Website projects, I pick up a template and immediately wrap it in JavaScript functions. I no longer use WordPress. I can get the best of HTML5 and CSS3 with pretty much the same level of effort (or less).

This approach also calls for a new kind of relationship between designers and developers. Anyone can write these JavaScript functions, especially the template designers. There is no “binding” syntax to learn, no JSX, no Angular template, just plain old JavaScript function.

Interestingly, from a reactive flow perspective, these functions can be deployed where it makes the most sense: on the server or on the client.

But most importantly, this approach allows the view to declare the minimum contract with the model and leaves the decision to the model as to what is the best way to bring this data to the view. Aspects like caching, lazy loading, orchestration, and consistency are entirely under the control of the model. Unlike templates or GraphQL, there is never a need to serve a direct request crafted from the view’s perspective.

Now that we have a way to decouple the model from the view, the next question is: how do you create a full application model from here? What would a “controller” look like? To answer that question, let’s go back to MVC.

Apple knows a thing or two about MVC since they “stole” the pattern from Xerox PARC in the early 80s and they have implemented it religiously since:

fig.3. the MVC Pattern

The core issue here is, as Andre Medeiros so eloquently puts it, that the MVC pattern is “interactive” (as opposed to Reactive). In traditional MVC, the action (controller) would call an update method on the model and upon success (or error) decide how to update the view. As he points out, it does not have to be that way, there is another equally valid, Reactive, path if you consider that actions should merely pass values to the model, regardless of the outcome, rather than deciding how the model should be updated.

The key question then becomes: how do you integrate actions in the reactive flow? If you want to understand a thing or two about Actions, you may want to take a look at TLA+. TLA stands for “Temporal Logic of Actions”, a formalism invented by Dr. Lamport, who got a Turing award for it. In TLA+, actions are pure functions:

             data’ = A (data)

I really like the TLA+ prime notation because it reinforces the fact that functions are mere transformations on a given data set.

With that in mind, a reactive MVC would probably look like:

             V = f( M.present( A(data) ) ) 

This expression stipulates that when an action is triggered, it computes a data set from a set of inputs (such as user inputs), that is presented to the model, which then decides whether and how to update itself. Once the update is complete, the view is rendered from the new model state. The reactive loop is closed. The way the model persists and retrieves its data is irrelevant to the reactive flow, and should certainly never, absolutely never, be “written by front-end engineers”. No apologies.
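A bare-bones sketch of that reactive loop, with names of my own choosing (this illustrates the expression above; it is not a prescribed API):

    // Sketch of V = f( M.present( A(data) ) )
    const model = {
      counter: 10,
      present: function (data) {
        // The model alone decides whether and how to update itself.
        if (typeof data.counter === 'number') {
          this.counter = data.counter;
        }
        render(this); // close the reactive loop
      }
    };

    // Action: a pure function that computes a data set from its inputs.
    function decrement(data) {
      return { counter: data.counter - 1 };
    }

    // View: computed from the new model state.
    function render(m) {
      console.log('counter = ' + m.counter);
    }

    model.present(decrement({ counter: model.counter }));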

Actions, again, are pure functions, with no state and no side effect (with respect to the model, not counting logging for instance).

A Reactive MVC pattern is interesting because, except for the model (of course), everything else is a pure function. In all fairness, Redux implements that particular pattern, but with the unnecessary ceremony of React and a tiny bit of coupling between the model and the actions in the reducer. The interface between the actions and the model is pure message passing.

That being said, the Reactive MVC pattern, as it stands, is incomplete, it does not scale to real-world applications as Dan likes to say. Let’s take a simple example to illustrate why.

Let’s say we need to implement an application that controls a rocket launcher: once we start the countdown, the system will decrement the counter and when it reaches zero, pending all properties of the model being at nominal values, the launch of the rocket will be initiated.

This application has a simple state machine:

fig.4. the Rocket Launcher state machine

Both decrement and launch are “automatic” actions: each time we enter (or re-enter) the counting state, the transition guards will be evaluated, and if the counter value is greater than zero the decrement action will be called, while when the value is zero the launch action will be called instead. An abort action can be undertaken at any point, which will transition the control system to the aborted state.

In MVC, that kind of logic would be implemented in the controller, perhaps triggered by a timer in the view.

This paragraph is very important, so please read carefully. We have seen that, in TLA+, the actions have no side effects and the resulting state is computed, once the model processed the action outputs and updated itself. That is a fundamental departure from the traditional state-machine semantics where the action specifies the resulting state, i.e. the resulting state is independent of the model. In TLA+, the actions that are enabled and therefore available to be triggered in the state representation (i.e. the view) are not linked directly to the action that triggered the state change. In other words, state machines should not be specified as tuples that connect two states (S1, A, S2) as they traditionally are, they are rather tuples of the form (Sk, Ak1, Ak2,…) that specify all the actions enabled, given a state Sk, with the resulting state being computed after an action has been applied to the system, and the model has processed the updates.

TLA+ semantics provides a superior way to conceptualize a system when you introduce a “state” object, separate from the actions and the view (which is merely a state representation).

The model in our example is as follows:

model = {
    counter: ,
    started: ,
    aborted: ,
    launched:
}

The four (control) states of the system are associated with the following values of the model:

            ready = {counter: 10, started: false, aborted: false, launched: false }

            counting = {counter: [0..10], started: true, aborted: false, launched: false }

            launched = {counter: 0, started: true, aborted: false, launched: true}

            aborted = {counter: [0..10], started: true, aborted: true, launched: false}

The model is specified by all the properties of the system and their potential values, while the state specifies the actions that are enabled, given a set of values. That kind of business logic must be implemented somewhere. We cannot expect that the user can be trusted to know which actions are possible and which are not. There is simply no other way around it. Yet, that kind of business logic is difficult to write, debug and maintain, especially when you have no semantics available to describe it, such as in MVC.

Let’s write some code for our rocket launcher example. From a TLA+ perspective, the next-action predicate logically follows the rendering of the state. Once the current state has been represented, the next step is to execute the next-action predicate, which computes and executes the next action, if any, which in turn will present its data to the model which will initiate the rendering of a new state representation, and so on.


fig.5. the rocket launcher implementation

Note that in a client/server architecture we would need to use a protocol like WebSocket (or polling when WebSocket is not available) to render the state representation properly after an automatic action is triggered.

I have written a very thin, open source library in Java and JavaScript that structures the state object with proper TLA+ semantics, and provided samples that use WebSocket, polling and queuing to implement the browser/server interactions. As you can see in the rocket launcher example, you should not feel obligated to use that library. The state implementation is relatively easy to code once you understand how to write it.
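Since fig. 5 is an image, here is a rough sketch of what such an implementation can look like; all the names are mine, and this is a simplification of the pattern, not the code from the figure or the library’s API:

    // Rough sketch of the rocket launcher -- illustrative names, not the fig. 5 code.
    const model = {
      counter: 10, started: false, aborted: false, launched: false,
      present: function (data) {
        if (data.aborted) {
          this.aborted = true;
        } else if (data.started) {
          this.started = true;
        } else if (data.launched && this.counter === 0) {
          this.launched = true;
        } else if (typeof data.counter === 'number' && this.started && !this.aborted) {
          this.counter = data.counter;
        }
        state.render(this);
      }
    };

    // Actions: pure with respect to the model; they only present data to it.
    const actions = {
      start:     ()  => model.present({ started: true }),
      decrement: (d) => model.present({ counter: d.counter - 1 }),
      launch:    ()  => model.present({ launched: true }),
      abort:     ()  => model.present({ aborted: true })
    };

    // State: a pure function of the model values, plus the next-action predicate.
    const state = {
      counting: (m) => m.started && !m.aborted && !m.launched,
      render: function (m) {
        console.log(JSON.stringify(m)); // the state representation (the view)
        this.nap(m);                    // then evaluate the next-action predicate
      },
      nap: function (m) {
        // Automatic actions: decrement while counting, launch when the counter hits zero.
        if (this.counting(m)) {
          if (m.counter > 0) {
            setTimeout(() => actions.decrement({ counter: m.counter }), 1000);
          } else {
            actions.launch();
          }
        }
      }
    };

    actions.start();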

I believe that we have now all the elements to formally introduce a new pattern, as an alternative to MVC, the SAM pattern (State-Action-Model), a reactive, functional, pattern with its roots in React.js and TLA+.

The SAM pattern can be represented by the following expression:

         V = S( vm( M.present( A(data) ) ), nap(M))

which stipulates that the view V of a system can be computed, after an action A has been applied, as a pure function of the model.

In SAM, A (actions), vm (view-model), nap (next-action predicate) and S (state representation) are and must all be pure functions. With SAM, what we commonly call the “state” (the values of the properties of the system) is entirely confined to the model and the logic that changes these values is not visible outside the model itself.

As a side note, the next-action predicate, nap(), is a callback invoked once the state representation has been created and is on its way to being rendered to the user.

fig.6. the State-Action-Model (SAM) Pattern

The pattern itself is independent of any protocol (and can be implemented without difficulty over HTTP) and any client/server topology.

SAM does not imply that you always have to use the semantics of a state machine to derive the content of the view. When actions are solely triggered from the view, the next-action predicate is a null function. It might be a good practice, though, to clearly surface the control states of the underlying state machine because the view might look different from one (control) state to another.

On the other hand, if your state machine involves automatic actions, neither your actions nor your model would be pure without a next-action predicate: either some actions will have to become stateful or the model will have to trigger actions which is not its role. Incidentally, and unintuitively, the state object does not hold any “state”, it is again a pure function which renders the view and computes the next-action predicate, both from the model property values.

The key benefit of this new pattern is that it clearly separates the CRUD operations from the Actions. The Model is responsible for its persistence which will be implemented with CRUD operations, not accessible from the view. In particular, the view will never be in the position to “fetch” data, the only things the view can do are to request the current state representation of the system and initiate a reactive flow by triggering actions.

Actions merely represent an authorized conduit to propose changes to the model. They, themselves, have no side effect (on the model). When necessary, actions may invoke 3rd party APIs (again, with no side effect to the model), for instance, a change of address action would want to call an address validation service and present to the model the address returned by that service.

This is how a “Change of Address” action, calling an address validation API would be implemented:


fig.7. the “Change of Address” implementation
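fig. 7 is also an image; a sketch of what such an action can look like follows. The validation endpoint and response shape are hypothetical, the model stub is only there to keep the sketch self-contained, and it assumes an environment with fetch (browser or Node 18+):

    // Minimal model stub so the sketch runs on its own; a real model would persist.
    const model = {
      address: null,
      addressError: null,
      present: function (data) { Object.assign(this, data); }
    };

    // Sketch of a "Change of Address" action. It has no side effect on the model:
    // it calls a (hypothetical) validation service and presents the result.
    function changeOfAddress(data) {
      return fetch('https://example.com/address/validate', { // hypothetical service
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data.address)
      })
        .then(response => response.json())
        .then(validated => model.present({ address: validated }))
        .catch(error => model.present({ addressError: error.message }));
    }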

The elements of the pattern, actions and models, can be composed liberally:

Function Composition

    data' = A(B(data))

Peer Composition (same data set presented to two models)

    M1.present(data')
    M2.present(data')

Parent-Child Composition (parent model controls data set presented to the child)

    M1.present(data', M2)

    function present(data, child) {
        // perform updates
        …
        // synch models
        child.present(c(data))
    }

Publish/Subscribe Composition

    M1.on("topic", present)
    M2.on("topic", present)

Or

    M1.on("data", present)
    M2.on("data", present)

For architects who think in terms of Systems of Record and Systems of Engagement, the pattern helps clarify the interface between these two layers (fig. 8), with the model being responsible for all interactions with the systems of record.

fig 8. SAM Composition model

The entire pattern itself is composable and you could implement a SAM instance running in the browser to support a wizard-like behavior (e.g. a ToDo application) interacting with a SAM instance on the server:

fig. 9 SAM instance composition

Please note that the inner SAM instance is delivered as part of the state representation generated by the outer instance.

Session rehydration should occur prior to triggering the action (fig. 10). SAM enables an interesting composition, where the view could call a third-party action, providing a token and a callback pointing to a system action that will authorize and validate the call before presenting the data to the model.

fig. 10 Session Management with SAM

From a CQRS perspective, the pattern does not make a particular distinction between Queries and Commands, though the underlying implementation needs to make that distinction. A search or query “action” is simply passing a set of parameters to the model. We can adopt a convention (e.g. the underscore prefix) to differentiate queries from commands, or we could use two distinct present methods on the model:

 { _name: '/^a/i' }       // names that start with A or a
 { _customerId: '123' }   // customer with id = 123

The model would perform the necessary operations to match the query, update its content and trigger the rendering of the view. A similar set of conventions could be used for creating, updating or deleting elements of the model. There are a number of styles which can be implemented to pass the action outputs to the model (data sets, events, actions…). There are pros and cons to each approach; in the end it might come down to preference. I favor the data set approach.
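As a sketch of what the underscore convention could look like inside the model’s present() method (the convention and all the names here are mine, not part of the pattern):

    // Queries are underscore-prefixed; commands are not. The model decides what to do.
    function render(m) {
      console.log(JSON.stringify({ results: m.results })); // stand-in state representation
    }

    const customersModel = {
      customers: [],
      results: [],
      present: function (data) {
        if (data._name) {
          // Query: filter the content, no mutation of the records.
          const re = new RegExp(data._name, 'i');
          this.results = this.customers.filter(c => re.test(c.name));
        } else if (data.name) {
          // Command: update the records.
          this.customers.push({ name: data.name });
        }
        render(this);
      }
    };

    customersModel.present({ name: 'Alice' }); // command
    customersModel.present({ _name: '^a' });   // query: names that start with A or a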

From an exception perspective, just like in React, it is expected that the model will hold the corresponding exception as property values (either presented by the action, or returned by a CRUD operation). These property values will be used while rendering the state representation to display the exception.

From a caching perspective, SAM offers a caching option at the state representation level. Intuitively, caching the results of these state representation functions should lead to a higher hit rate since we are now triggering the cache at the component/state level rather than the action/response level.
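A minimal memoization sketch at the state-representation level (purely illustrative; the cache key here is just the serialized slice of the model the function depends on):

    // Because a state representation is a pure function of the model values,
    // its output can be cached, keyed on those values alone.
    function cachedState(represent) {
      const cache = new Map();
      return function (m) {
        const key = JSON.stringify(m); // the relevant slice of the model
        if (!cache.has(key)) {
          cache.set(key, represent(m));
        }
        return cache.get(key);
      };
    }

    const counterView = cachedState(m => '<p>Counter: ' + m.counter + '</p>');
    counterView({ counter: 10 }); // computed
    counterView({ counter: 10 }); // served from the cache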

The reactive and functional structure of the pattern makes replay and unit testing a breeze.

The SAM pattern completely changes the paradigm of front-end architectures because, on the foundation of TLA+, the business logic can be clearly delineated into:

  • Actions as pure functions
  • CRUD operations in the model
  • States which control automatic Actions

From my perspective as an API designer, the pattern pushes the responsibility of the design of APIs back to the server, with the smallest contract possible between the view and the model.

Actions, as pure functions, can be reused across models as long as a model accepts the corresponding output of the action. We can expect that libraries of actions, themes (state representations), and possibly models will flourish, since they can now be independently composed.

With SAM, microservices fit naturally behind the model. Frameworks like Hivepod.io can be plugged in, pretty much as-is, at that level.

Most importantly the pattern, like React, does not require any data binding or template.

Over time I expect that SAM will contribute to make the virtual-dom a permanent feature of the browser and new state representations will be directly processed via a dedicated API.

I found this journey to be transformative: decades of Object Orientation seem to be all but gone. I can no longer think in terms other than reactive or functional. The kinds of things I have been building with SAM, and the speed at which I can build them, are unprecedented. One more thing: I can now focus on designing APIs and services that do not follow the screen scraping pattern.

I wanted to thank and acknowledge the people who kindly accepted to review this article: Prof. Jean Bezivin, Prof. Joëlle Coutaz, Braulio Diez, Adron Hall, Edwin Khodabackchian, Guillaume Laforge, Pedro Molina, Arnon Rotem-Gal-Oz.

Jean-Jacques Dubray is the founder of xgen.io and gliiph. He has been building Service Oriented Architectures and API platforms for the last 15 years. He is a former member of the research staff at HRL and earned his Ph.D. from the University of Provence (Luminy campus), home of the Prolog language. He is the inventor of the BOLT methodology.


Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/u8DZoE2Rh6Y/no-more-mvc-frameworks

Original article

Wikipedia starts work on $2.5M internet search engine project to rival Google [pdf]

Summary

This is the original grant agreement for the knowledge engine grant awarded to the Wikimedia Foundation.



Original URL: http://feedproxy.google.com/~r/feedsapi/BwPx/~3/re64uLOpZ2A/File%3AKnowledge_engine_grant_agreement.pdf

Original article
