
Digital journalism


Digital journalism, also known as online journalism, is a contemporary form of journalism in which editorial content is distributed via the Internet rather than through print or broadcast. What constitutes 'digital journalism' is debated by scholars. However, the primary product of journalism, which is news and features on current affairs, is presented solely or in combination as text, audio, video, or interactive forms such as newsgames, and disseminated through digital media technology.
Fewer barriers to entry, lowered distribution costs, and diverse computer networking technologies have led to the widespread practice of digital journalism. It has democratized the flow of information that was previously controlled by traditional media including newspapers, magazines, radio, and television.

Some have asserted that a greater degree of creativity can be exercised with digital journalism when compared to traditional journalism and traditional media. The digital aspect may be central to the journalistic message and remains, to some extent, within the creative control of the writer, editor, and/or publisher.

Overview

There is no absolute agreement as to what constitutes digital journalism. Mu Lin argues that "Web and mobile platforms demand us to adopt a platform-free mindset for an all-inclusive production approach – create the (digital) contents first, then distribute via appropriate platforms." The repurposing of print content for an online audience is sufficient for some, while others require content created with the digital medium's unique features, like hypertextuality. Fondevila Gascón adds multimedia and interactivity to complete the essence of digital journalism. For Deuze, online journalism can be functionally differentiated from other kinds of journalism by its technological component, which journalists have to consider when creating or displaying content. Digital journalistic work ranges from purely editorial content, like CNN online (produced by professional journalists), to public-connectivity websites, like Slashdot (communication lacking formal barriers of entry). What distinguishes digital journalism from traditional journalism may be its reconceptualized role of the reporter in relation to audiences and news organizations. Society's expectation of instant information was important to the evolution of digital journalism. However, it is likely that the exact nature and roles of digital journalism will not be fully known for some time.

History

The first type of digital journalism, called teletext, was invented in the UK in 1970. Teletext is a system allowing viewers to choose which stories they wish to read and see them immediately. The information provided through teletext is brief and instant, similar to the information seen in digital journalism today. The information was broadcast between the frames of a television signal, in what was called the Vertical Blanking Interval (VBI).

American journalist Hunter S. Thompson relied on early digital communication technology, beginning in 1971 when he used a fax machine to report from the US presidential campaign trail, as documented in his book Fear and Loathing on the Campaign Trail.

After the invention of teletext came the invention of videotex, of which Prestel was the world's first system, launching commercially in 1979 with various British newspapers, such as the Financial Times, lining up to deliver newspaper stories online through it. Videotex closed down in 1986, having failed to meet end-user demand.

American newspaper companies took notice of the new technology and created their own videotex systems, the largest and most ambitious being Viewtron, a service of Knight-Ridder launched in 1981. Others were Keycom in Chicago and Gateway in Los Angeles. All of them had closed by 1986.
Next came computer Bulletin Board Systems. In the late 1980s and early 1990s, several smaller newspapers started online news services using BBS software and telephone modems. The first of these was the Albuquerque Tribune in 1989. 

In September 1992, Computer Gaming World broke the news of Electronic Arts' acquisition of Origin Systems on Prodigy before its next print issue went to press. Online news websites began to proliferate in the 1990s. An early adopter was The News & Observer in Raleigh, North Carolina, which offered online news as Nando, a service that originated in the early 1990s as "NandO Land". Steve Yelvington, writing on the Poynter Institute website about the N&O-owned service, said that "Nando evolved into the first serious, professional news site on the World Wide Web". It is believed that a major increase in digital online journalism occurred around this time, with the release of the first commercial web browsers, Netscape Navigator (1994) and Internet Explorer (1995). By 1996, most news outlets had an online presence. Although journalistic content was repurposed from original text, video, and audio sources without change in substance, it could be consumed in different ways because of its online form, through toolbars, topically grouped content, and intertextual links. A twenty-four-hour news cycle and new forms of user-journalist interaction, such as web boards, were among the features unique to the digital format. Later, portals such as AOL and Yahoo! and their news aggregators (sites that collect and categorize links from news sources) led news agencies such as The Associated Press to supply digitally suited content for aggregation, beyond the limit of what client news providers could use in the past.

Salon, founded in 1995, was called the Internet's "preeminent independent venue for journalism" by the American Journalism Review in 2001.

In 2008, for the first time, more Americans reported getting their national and international news from the Internet than from newspapers. According to a Pew Research Center report, young people aged 18 to 29 now primarily get their news via the Internet, with sixty-five percent of that group accessing the news online. Audiences for news sites continued to grow due to the launch of new news sites, continued investment in online news by conventional news organizations, and the continued growth in internet audiences overall.

Mainstream news sites are the most widespread form of online news media production. As of 2000, the vast majority of journalists in the Western world used the internet regularly in their daily work. In addition to mainstream news sites, digital journalism is found in index and category sites (sites without much original content but many links to existing news sites), meta- and comment sites (sites about news media issues, like media watchdogs), and share and discussion sites (sites that facilitate the connection of people, like Slashdot). Blogs are another digital journalism phenomenon capable of delivering fresh information, ranging from personal sites to those with audiences of hundreds of thousands. Digital journalism is also involved in the cloud journalism phenomenon, a constant flow of content in the Broadband Society.

Prior to 2008, the industry had hoped that publishing news online would prove lucrative enough to fund the costs of conventional newsgathering. In 2008, however, online advertising began to slow down, and little progress was made towards development of new business models. The Pew Project for Excellence in Journalism describes its 2008 report on the State of the News Media, its sixth, as its bleakest ever. Despite the uncertainty, online journalists report expanding newsrooms. They believe advertising is likely to be the best revenue model supporting the production of online news.

Many news organizations based in other media also distribute news online, but the amount they use of the new medium varies. Some news organizations use the Web exclusively or as a secondary outlet for their content. The Online News Association, founded in 1999, is the largest organization representing online journalists, with more than 1,700 members whose principal livelihood involves gathering or producing news for digital presentation.

The Internet challenges traditional news organizations in several ways. Newspapers may lose classified advertising to websites, which are often targeted by interest instead of geography. These organizations are concerned about real and perceived loss of viewers and circulation to the Internet.

Hyperlocal journalism is journalism within a very small community. Like other types of digital journalism, it is very convenient for the reader, offers more information than earlier forms of journalism, and is free or inexpensive.

Impact on readers

Digital journalism allows for connection and discussion at levels that print does not offer on its own. People can comment on articles and start discussion boards to discuss articles. Before the Internet, spontaneous discussion between readers who had never met was all but impossible. The process of discussing a news item is a large part of what makes digital journalism distinctive: people add to the story and connect with others who want to discuss the topic.

Digital journalism creates an opportunity for niche audiences, allowing people to have more options as to what to view and read.

Digital journalism opens up new ways of storytelling; through the technical components of the new medium, digital journalists can provide a variety of media, such as audio, video, and digital photography.

Digital journalism represents a revolution in how news is consumed by society. Online sources are able to provide quick, efficient, and accurate reporting of breaking news in a matter of seconds, providing society with a synopsis of events as they occur. As an event develops, journalists are able to feed information to online outlets, keeping readers up to date in mere seconds. The speed with which a story can be posted can affect the accuracy of the reporting in a way that does not usually happen in print journalism. Before the emergence of digital journalism, the printing process took much more time, allowing for the discovery and correction of errors.

News consumers must become Web-literate and use critical thinking to evaluate the credibility of sources. Because it is possible for anyone to write articles and post them on the Internet, the definition of journalism is changing. Because it is becoming increasingly simple for the average person to have an impact on the news world through tools like blogs and even comments on news stories on reputable news websites, it becomes increasingly difficult to sift through the massive amount of information coming in from the digital arena of journalism.

There are great advantages to digital journalism and the new blogging evolution that people are becoming accustomed to, but there are disadvantages. For instance, people are used to what they already know and cannot always catch up quickly with new technologies. The goals of print and digital journalism are the same, although different tools are required.

The interaction between the writer and consumer is new, and this can be credited to digital journalism. There are many ways to publish personal thoughts on the Web. There are some disadvantages to this, however, the main one being factual accuracy. There is a pressing need for accuracy in digital journalism, and until outlets find reliable ways to ensure it, they will continue to face criticism.

One major dispute regards the credibility of these online news websites. A digital journalism credibility study performed by the Online News Association compares the online public credibility ratings to actual media respondent credibility ratings. Looking at a variety of online media sources, the study found that overall the public saw online media as more credible than it actually is.

The effects of digital journalism are evident worldwide. This form of journalism has pushed journalists to reform and evolve. Older journalists who are not tech-savvy have felt the blunt force of this. In recent years, a number of older journalists have been pushed out and younger journalists brought in because of their lower cost and their ability to work with advanced technology.

Impact on publishers

Many newspapers, such as The New York Times, have created online sites to remain competitive, taking advantage of audio, video, and text linking to remain at the top of news consumers' lists. Most news now reaches its audience through handheld devices such as smartphones and tablets, so audio and video support is a definite advantage.

Newspapers rarely break news stories any more, with most websites reporting on breaking news before the cable news channels. Digital journalism allows reports to start out vague and generalized and progress toward a fuller story. Newspapers and cable TV are at a disadvantage because they generally can only put together stories when an ample amount of detail and information is available. Often, newspapers have to wait for the next day, or even two days later if it is a late-breaking story, before being able to publish. Newspapers are losing a lot of ground to their online counterparts, with ad revenue shifting to the Internet and subscriptions to the printed paper decreasing. People are now able to find the news they want, when they want it, without leaving their homes or paying to receive it, even though there are still people who are willing to pay for online journalistic content.

Because of this, many people have viewed digital journalism as the death of journalism. According to communication scholar Nicole Cohen, "four practices stand out as putting pressure on traditional journalism production: outsourcing, unpaid labour, metrics and measurement, and automation". Free advertising on websites such as Craigslist has transformed how people publicize; the Internet has created a faster, cheaper way for people to get news out, thus creating the shift in ad sales from standard newspapers to the Internet. There has been a substantial effect of digital journalism and media on the newspaper industry, with the creation of new business models. It is now possible to contemplate a time in the near future when major towns will no longer have a newspaper and when magazines and network news operations will employ no more than a handful of reporters. Many newspapers and individual print journalists have been forced out of business because of the popularity of digital journalism. Newspapers unwilling to be forced out of business have attempted to survive by cutting costs, laying off staff, shrinking the size of their publications, eliminating editions, and partnering with other businesses to share coverage and content. In 2009, one study concluded that most journalists are ready to compete in a digital world and that these journalists believe the transition from print to digital journalism in their newsroom is moving too slowly. Some highly specialized positions in the publishing industry have become obsolete. The growth in digital journalism and the near collapse of the economy have also led to downsizing for those in the industry.

Students wishing to become journalists now need to be familiar with digital journalism in order to contribute and develop journalism skills. Not only must a journalist analyze their audience and focus on effective communication with them, they must also be quick: news websites are able to update their stories within minutes of a news event. Other skills may include creating a website and uploading information using basic programming skills.

Critics believe digital journalism has made it easier for individuals who are not qualified journalists to misinform the general public. Many believe that this form of journalism has created a number of sites that do not have credible information. Sites such as PerezHilton.com have been criticized for blurring the lines between journalism and opinionated writing.

Some critics believe that newspapers should not switch to a solely Internet-based format, but instead keep a component of print as well as digital. 

Digital journalism allows citizens and readers the opportunity to join in threaded discussions relating to a news article. This offers an excellent way for writers and reporters to decide what is important and what should be omitted in the future; these threads can provide useful feedback so that future articles can be pruned and improved.

Implications for traditional journalism

Digitization is currently causing many changes to traditional journalistic practice. The labor of journalists in general is becoming increasingly dependent on digital journalism. Scholars outline that this is actually a change to the execution of journalism, not to the conception part of the labor process. They also contend that this is simply the de-skilling of some skills and the up-skilling of others. This theory stands in contrast to the notion that technological determinism is negatively affecting journalism; it should instead be understood as a change to the traditional skill set. Communication scholar Nicole Cohen believes there are several trends putting pressure on this traditional skill set, among them outsourcing, algorithms, and automation. Although she believes that technology could be used to improve journalism, she feels the current trends in digital journalism are so far affecting the practice in a negative way.

There is also the impact that digital journalism faces from citizen journalism. Because digital journalism takes place online and is contributed mostly by citizens on user-generated content sites, competition is growing between the two. Citizen journalism allows anyone to post anything, and because of that, journalists are being forced by their employers to publish more news content than before, which often means rushing news stories and failing to confirm information.

Work outside traditional press

The Internet has also given rise to more participation by people who are not normally journalists, such as through Indymedia.

Bloggers write on web logs, or blogs. Traditional journalists often do not consider bloggers to automatically be journalists. This has more to do with standards and professional practices than the medium. For instance, crowdsourced and crowdfunded journalism attracts amateur journalists, as well as ambitious professionals who are restrained by the boundaries set by the traditional press. However, the implication of these types of journalism is that they may disregard the professional norms of journalistic practice that ensure accuracy and impartiality of the content. Still, as of 2005, blogging had generally gained at least more attention and has had some effects on mainstream journalism, such as exposing problems related to a television piece about President George W. Bush's National Guard service.

Recent legal judgments have determined that bloggers are entitled to the same protections as other journalists, subject to the same responsibilities. In the United States, the Electronic Frontier Foundation has been instrumental in advocating for the rights of journalist bloggers.

In Canada, the Supreme Court of Canada ruled that: "[96] A second preliminary question is what the new defence should be called. In arguments before us, the defence was referred to as the responsible journalism test. This has the value of capturing the essence of the defence in succinct style. However, the traditional media are rapidly being complemented by new ways of communicating on matters of public interest, many of them online, which do not involve journalists. These new disseminators of news and information should, absent good reasons for exclusion, be subject to the same laws as established media outlets. I agree with Lord Hoffmann that the new defence is 'available to anyone who publishes material of public interest in any medium': Jameel, at para. 54."

Other significant tools of online journalism are Internet forums, discussion boards, and chats, especially those representing the Internet version of official media. The widespread use of the Internet all over the world has created a unique opportunity to create a meeting place for both sides in many conflicts, such as the Israeli–Palestinian conflict and the First and Second Chechen Wars. Often this gives a unique chance to find new, alternative solutions to the conflict, but often the Internet is turned into a battlefield by the opposing parties, creating endless "online battles."

Internet radio and podcasts are other growing independent media based on the Internet.

Blogs

With the rise of digital media, there is a move from the traditional journalist to the blogger or amateur journalist. Blogs can be seen as a new genre of journalism because of their "narrative style of news characterized by personalization" that moves away from traditional journalism's approach, changing journalism into a more conversational and decentralized type of news. Blogging has become a large part of the transmitting of news and ideas across cities, states, and countries, and bloggers argue that blogs themselves are now breaking stories. Even online news publications have blogs that are written by their affiliated journalists or other respected writers. Blogging allows readers and journalists to be opinionated about the news and talk about it in an open environment. Blogs allow comments where some news outlets do not, due to the need to constantly monitor what is posted. By allowing comments, the reader can interact with a story instead of just absorbing the words on the screen. According to one 2007 study, 15% of those who read blogs read them for news.

However, many blogs are highly opinionated and have a bias, and some are not verified to be true. In 2009, in response to questions about the integrity of product and service reviews in the online community, the Federal Trade Commission (FTC) established guidelines mandating that bloggers disclose any free goods or services they receive from third parties.

Citizen journalism

Digital journalism's lack of a traditional "editor" has given rise to citizen journalism. The early advances that the digital age offered journalism were faster research, easier editing, convenience, and a faster delivery time for articles. The Internet has broadened the effect that the digital age has on journalism. Because of the popularity of the Internet, most people have access and can add their forms of journalism to the information network. This allows anyone to share anything they deem important that has happened in their community. Individuals who are not professional journalists but who present news through their blogs or websites are often referred to as citizen journalists; one does not need a degree to be a citizen journalist. Citizen journalists are able to publish information that might not be reported otherwise, and the public has a greater opportunity to be informed. Some companies use the information that a citizen journalist relays when they themselves cannot access certain situations, for example in countries where freedom of the press is limited. Anyone can record events as they happen and send the recording anywhere they wish, or put it on their website. Non-profit and grassroots digital journalism sites may have far fewer resources than their corporate counterparts, yet thanks to digital media they are able to run websites that are technically comparable. Other media outlets can then pick up their story and run with it as they please, thus allowing information to reach wider audiences.

For citizen journalism to be effective and successful, there need to be citizen editors, whose role is to solicit other people to provide accurate information and to mediate interactivity among users. An example can be found in the start-up of the South Korean online daily newspaper OhMyNews, where the founder recruited several hundred volunteer "citizen reporters" to write news articles that were edited and processed by four professional journalists.

Legal issues

One emerging problem with online journalism in the United States is that, in many states, individuals who publish only on the Web do not enjoy the same First Amendment rights as reporters who work for traditional print or broadcast media. As a result, unlike a newspaper, they are much more liable for such things as libel. In California, however, protection of anonymous blog sources was ruled to be the same for both kinds of journalism: O'Grady v. Superior Court, 139 Cal. App. 4th 1423, 44 Cal. Rptr. 3d 72 (Cal. Ct. App. 2006), modified by O'Grady v. Superior Court, 140 Cal. App. 4th 675b (2006).

Extra-jurisdictional enforcement

In Canada there are more ambiguities, as Canadian libel law permits suits to succeed even if no false statements of fact are involved, and even if matters of public controversy are being discussed. In British Columbia, as part of "a spate of lawsuits" against online news sites, according to legal columnist Michael Geist, several cases have put key issues in online journalism up for rulings. Geist mentioned that Green Party of Canada financier Wayne Crookes filed a suit alleging damages against an online news service that republished resignation letters from that party and let users summarize claims they contained. He had demanded access to all the anonymous sources confirming the insider information, which Geist believed would be extremely prejudicial to online journalism. The lawsuit, "Crookes versus open politics", attracted attention from the BBC and major newspapers, perhaps because of its humorous name. Crookes had also objected to satire published on the site, including use of the name "gang of Crookes" for his allies. Subsequently, Crookes sued Geist, expanding the circle of liability. Crookes also sued Google, Wikipedia, Yahoo, PBwiki, domain registrars, and Green bloggers who he felt were associated with his political opponents. Crookes' attempt to enforce BC's plaintiff-friendly libel laws on California, Ontario, and other jurisdictions led to an immediate backlash in bad publicity, but the legal issues remain somewhat unresolved as of November 2009. Crookes lost four times on the grounds that he had not shown anyone in BC had actually read the materials on the minor websites, but this left the major question unresolved: how to deal with commentary deemed fair in one jurisdiction but actionable in another, and how to ensure that universal rights to free speech and reputation are balanced in a way that does not lead to radically different outcomes for two people who might, for instance, participate in a conversation on the Internet.

International issues

Non-democratic regimes that do not respect international human rights law present special challenges for online journalism:
  • Persons reporting from those regimes or with relatives under those regimes may be intimidated, harassed, tortured or killed and the risk of their exposure generally rises if they become involved in a private dispute and are subjected to civil discovery, or if a plaintiff or police officer or government official pressures an international service provider to disclose their identity.
  • If print and broadcast journalists are excluded, unverifiable reports from persons on the spot (as during the Iran election crisis of 2009) may be the only way to relay news at all—each individual incident may be unverifiable though statistically a much more representative sample of events might be gathered this way if enough citizens are participating in gathering the news.
  • Court processes that do not explicitly respect the rights of fair comment on public issues, political expression in general, religious freedoms, or the right to dissent from government decisions or oppose power figures, could be imposed on persons who merely comment on a blog or wiki. If judgments can be enforced at a distance, this may require expensive legal responses or a chill on comment while cases move through a remote court, with the proceedings possibly even being heard in a foreign language under rules the commentator has never heard of before. If people from relatively free countries engage in conversations with those from oppressive countries, for instance on homosexuality, they may actually contribute to the exposure and loss of human rights of their correspondents.

News collections

The Internet also offers options such as personalized news feeds and aggregators, which compile news from different websites into one site. One of the most popular news aggregators is Google News. Others include Topix.net and TheFreeLibrary.com.

But some people see too much personalization as detrimental. For example, some fear that people will have narrower exposure to news, seeking out only those commentators who already agree with them.

As of March 2005, Wikinews rewrites articles from other news organizations. Original reporting remains a challenge on the Internet as the burdens of verification and legal risks (especially from plaintiff-friendly jurisdictions like BC) remain high in the absence of any net-wide approach to defamation.

Web analytics


Web analytics is the measurement, collection, analysis, and reporting of web data for the purposes of understanding and optimizing web usage. Web analytics is not just a process for measuring web traffic; it can also be used as a tool for business and market research, and to assess and improve the effectiveness of a website. Web analytics applications can also help companies measure the results of traditional print or broadcast advertising campaigns, for example by estimating how traffic to a website changes after the launch of a new advertising campaign. Web analytics provides information about the number of visitors to a website and the number of page views, and helps gauge traffic and popularity trends, which is useful for market research.

Basic steps of the web analytics process


Most web analytics processes come down to four essential stages or steps, which are:
  • Collection of data: This stage is the collection of the basic, elementary data. Usually, these data are counts of things. The objective of this stage is to gather the data.
  • Processing of data into information: This stage usually takes the counts and makes them into ratios, although there may still be some counts. The objective of this stage is to take the data and conform it into information, specifically metrics.
  • Developing KPI: This stage focuses on using the ratios (and counts) and infusing them with business strategies, referred to as key performance indicators (KPI). Many times, KPIs deal with conversion aspects, but not always. It depends on the organization.
  • Formulating online strategy: This stage is concerned with the online goals, objectives, and standards for the organization or business. These strategies are usually related to making money, saving money, or increasing market share.
Another essential function that analysts develop for website optimization is experimentation:
  • Experiments and testing: A/B testing is a controlled experiment with two variants, in online settings such as web development.
The goal of A/B testing is to identify changes to web pages that increase or maximize a statistically tested result of interest, as the sketch below illustrates.
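
To make stages two and three, and the testing step, concrete, here is a minimal Python sketch (all counts, names, and thresholds are hypothetical): it turns raw counts into a ratio metric that could serve as a KPI, then applies the kind of two-proportion z-test commonly used to evaluate an A/B experiment.

```python
from math import sqrt

# Stage 2-3 sketch: turn raw counts into a ratio metric usable as a KPI.
visits, orders = 12500, 340            # hypothetical monthly counts
conversion_rate = orders / visits      # a ratio metric, e.g. an e-commerce KPI

def ab_test(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test: did variant B's conversion rate differ
    from variant A's at (roughly) the 95% confidence level?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return {"rate_a": p_a, "rate_b": p_b, "z": z, "significant": abs(z) > z_crit}

# e.g. 40 conversions from 1,000 views of page A vs 62 from 1,000 views of B
print(conversion_rate, ab_test(40, 1000, 62, 1000))
```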

Each stage impacts or can impact (i.e., drives) the stage preceding or following it. So, sometimes the data that is available for collection impacts the online strategy. Other times, the online strategy affects the data collected.

Web analytics technologies

There are at least two categories of web analytics: off-site and on-site web analytics.
  1. Off-site web analytics refers to web measurement and analysis regardless of whether you own or maintain a website. It includes the measurement of a website's potential audience (opportunity), share of voice (visibility), and buzz (comments) that is happening on the Internet as a whole.
  2. On-site web analytics, the more common of the two, measures a visitor's behavior once on your website. This includes its drivers and conversions; for example, the degree to which different landing pages are associated with online purchases. On-site web analytics measures the performance of your website in a commercial context. This data is typically compared against key performance indicators for performance, and is used to improve a website or marketing campaign's audience response. Google Analytics and Adobe Analytics are the most widely used on-site web analytics services, although new tools are emerging that provide additional layers of information, including heat maps and session replay.
Historically, web analytics has been used to refer to on-site visitor measurement. However, this meaning has become blurred, mainly because vendors are producing tools that span both categories. Many different vendors provide on-site web analytics software and services. There are two main technical ways of collecting the data. The first and traditional method, server log file analysis, reads the logfiles in which the web server records file requests by browsers. The second method, page tagging, uses JavaScript embedded in the webpage to make image requests to a third-party analytics-dedicated server, whenever a webpage is rendered by a web browser or, if desired, when a mouse click occurs. Both collect data that can be processed to produce web traffic reports.

Web analytics data sources

The fundamental goal of web analytics is to collect and analyze data related to web traffic and usage patterns. The data mainly comes from four sources (a combined record is sketched after the list):
  1. Direct HTTP request data: directly comes from HTTP request messages (HTTP request headers).
  2. Network level and server generated data associated with HTTP requests: not part of an HTTP request, but it is required for successful request transmissions. For example, IP address of a requester.
  3. Application level data sent with HTTP requests: generated and processed by application level programs (such as JavaScript, PHP, and ASP.Net), including session and referrals. These are usually captured by internal logs rather than public web analytics services.
  4. External data: can be combined with on-site data to help augment the website behavior data described above and interpret web usage. For example, IP addresses are usually associated with geographic regions and internet service providers; other external sources include e-mail open and click-through rates, direct-mail campaign data, sales and lead history, or other data types as needed.
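
As an illustration, the four categories might be joined into a single enriched record like the hypothetical one below; every field name and value here is an assumption made for the sketch, not a standard schema.

```python
# One enriched analytics record combining the four source categories.
record = {
    # 1. Direct HTTP request data (request line and headers)
    "method": "GET", "path": "/pricing", "user_agent": "Mozilla/5.0 ...",
    # 2. Network/server-level data accompanying the request
    "ip": "203.0.113.4", "server_time": "2019-02-18T10:15:00Z",
    # 3. Application-level data (e.g. set by JavaScript or PHP)
    "session_id": "a1b2c3", "referrer": "https://example.com/blog",
    # 4. External data joined in later (geo lookup, campaign history, ...)
    "geo_country": "US", "campaign": "spring-mailer",
}
```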

Web server log file analysis

Web servers record some of their transactions in a log file. It was soon realized that these log files could be read by a program to provide data on the popularity of the website. Thus arose web log analysis software.

In the early 1990s, website statistics consisted primarily of counting the number of client requests (or hits) made to the web server. This was a reasonable method initially, since each website often consisted of a single HTML file. However, with the introduction of images in HTML, and websites that spanned multiple HTML files, this count became less useful. The first true commercial Log Analyzer was released by IPRO in 1994.

Two units of measure were introduced in the mid-1990s to gauge more accurately the amount of human activity on web servers. These were page views and visits (or sessions). A page view was defined as a request made to the web server for a page, as opposed to a graphic, while a visit was defined as a sequence of requests from a uniquely identified client that expired after a certain amount of inactivity, usually 30 minutes. Page views and visits are still commonly displayed metrics, but they are now considered rather rudimentary.
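
Both ideas can be sketched in a few lines of Python. This assumes Apache-style common-log lines; the regular expression, the 30-minute timeout, and the asset-extension filter are simplifying assumptions for illustration, not a production parser.

```python
import re
from datetime import datetime, timedelta

LOG_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)')
TIMEOUT = timedelta(minutes=30)   # conventional inactivity limit for a visit
ASSETS = (".gif", ".png", ".jpg", ".css", ".js")

def parse(lines):
    """Yield (client, timestamp, path) tuples from common-log-format lines."""
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            client, ts, path = m.groups()
            yield client, datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z"), path

def sessionize(records):
    """Group requests into visits: same client, gaps under 30 minutes.
    Every request is a hit; only non-asset requests count as page views."""
    open_visits, visits = {}, []
    for client, ts, path in sorted(records, key=lambda r: r[1]):
        visit = open_visits.get(client)
        if visit is None or ts - visit["end"] > TIMEOUT:
            visit = {"client": client, "start": ts, "end": ts, "pages": []}
            open_visits[client] = visit
            visits.append(visit)
        if not path.endswith(ASSETS):
            visit["pages"].append(path)
        visit["end"] = ts
    return visits
```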

The emergence of search engine spiders and robots in the late 1990s, along with web proxies and dynamically assigned IP addresses for large companies and ISPs, made it more difficult to identify unique human visitors to a website. Log analyzers responded by tracking visits by cookies, and by ignoring requests from known spiders.

The extensive use of web caches also presented a problem for log file analysis. If a person revisits a page, the second request will often be retrieved from the browser's cache, and so no request will be received by the web server. This means that the person's path through the site is lost. Caching can be defeated by configuring the web server, but this can result in degraded performance for the visitor and a bigger load on the servers.

Page tagging

Concerns about the accuracy of log file analysis in the presence of caching, and the desire to be able to perform web analytics as an outsourced service, led to the second data collection method, page tagging or 'Web bugs'. 

In the mid-1990s, Web counters were commonly seen — these were images included in a web page that showed the number of times the image had been requested, which was an estimate of the number of visits to that page. In the late 1990s this concept evolved to include a small invisible image instead of a visible one, and, by using JavaScript, to pass along with the image request certain information about the page and the visitor. This information can then be processed remotely by a web analytics company, and extensive statistics generated.
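
The server side of such a tag can be sketched in a few lines. Below is a hypothetical collector using Flask: it answers requests for an invisible image and logs whatever query parameters the page's tag script appended. The endpoint name, parameter names, and output file are all assumptions for the sketch.

```python
import base64, json, time
from flask import Flask, Response, request

app = Flask(__name__)

# The smallest valid transparent 1x1 GIF, served as the "invisible image".
PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

@app.route("/pixel.gif")
def pixel():
    record = {
        "time": time.time(),
        "page": request.args.get("page"),       # added by the page's tag script
        "referrer": request.args.get("ref"),
        "ip": request.remote_addr,
        "user_agent": request.headers.get("User-Agent"),
    }
    with open("beacons.jsonl", "a") as log:     # one JSON record per beacon
        log.write(json.dumps(record) + "\n")
    return Response(PIXEL, mimetype="image/gif")
```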

The web analytics service also manages the process of assigning a cookie to the user, which can uniquely identify them during their visit and in subsequent visits. Cookie acceptance rates vary significantly between websites and may affect the quality of data collected and reported. 

Collecting website data using a third-party data collection server (or even an in-house data collection server) requires an additional DNS look-up by the user's computer to determine the IP address of the collection server. On occasion, delays in completing a DNS look-up, whether successful or failed, may result in data not being collected.

With the increasing popularity of Ajax-based solutions, an alternative to the use of an invisible image is to implement a call back to the server from the rendered page. In this case, when the page is rendered on the web browser, a piece of Ajax code would call back to the server and pass information about the client that can then be aggregated by a web analytics company. This is in some ways flawed by browser restrictions on the servers which can be contacted with XmlHttpRequest objects. Also, this method can lead to slightly lower reported traffic levels, since the visitor may stop the page from loading in mid-response before the Ajax call is made.

Logfile analysis vs page tagging

Both logfile analysis programs and page tagging solutions are readily available to companies that wish to perform web analytics. In some cases, the same web analytics company will offer both approaches. The question then arises of which method a company should choose. There are advantages and disadvantages to each approach.

Advantages of logfile analysis

The main advantages of log file analysis over page tagging are as follows:
  • The web server normally already produces log files, so the raw data is already available. No changes to the website are required.
  • The data is on the company's own servers, and is in a standard, rather than a proprietary, format. This makes it easy for a company to switch programs later, use several different programs, and analyze historical data with a new program.
  • Logfiles contain information on visits from search engine spiders, which generally do not execute JavaScript on a page and are therefore not recorded by page tagging. Although these should not be reported as part of the human activity, it is useful information for search engine optimization.
  • Logfiles require no additional DNS lookups or TCP slow starts. Thus there are no external server calls which can slow page load speeds, or result in uncounted page views.
  • The web server reliably records every transaction it makes, e.g. serving PDF documents and content generated by scripts, and does not rely on the visitors' browsers cooperating.

Advantages of page tagging

The main advantages of page tagging over log file analysis are as follows:
  • Counting is activated by opening the page (given that the web client runs the tag scripts), not requesting it from the server. If a page is cached, it will not be counted by server-based log analysis. Cached pages can account for up to one-third of all page views. Not counting cached pages seriously skews many site metrics. It is for this reason server-based log analysis is not considered suitable for analysis of human activity on websites.
  • Data is gathered via a component ("tag") in the page, usually written in JavaScript, though Java can be used, and increasingly Flash is used. Ajax can also be used in conjunction with a server-side scripting language (such as PHP) to manipulate the data and (usually) store it in a database, enabling complete control over how the data is represented.
  • The script may have access to additional information on the web client or on the user, not sent in the query, such as visitors' screen sizes and the price of the goods they purchased.
  • Page tagging can report on events which do not involve a request to the web server, such as interactions within Flash movies, partial form completion, mouse events such as onClick, onMouseOver, onFocus, onBlur etc.
  • The page tagging service manages the process of assigning cookies to visitors; with log file analysis, the server has to be configured to do this.
  • Page tagging is available to companies who do not have access to their own web servers.
  • Lately page tagging has become a standard in web analytics.

Economic factors

Logfile analysis is almost always performed in-house. Page tagging can be performed in-house, but it is more often provided as a third-party service. The economic difference between these two models can also be a consideration for a company deciding which to purchase.
  • Logfile analysis typically involves a one-off software purchase; however, some vendors are introducing maximum annual page views with additional costs to process additional information. In addition to commercial offerings, several open-source logfile analysis tools are available free of charge.
  • Logfile analysis requires data to be stored and archived, and this data often grows large quickly. Although the cost of the hardware to do this is minimal, the overhead for an IT department can be considerable.
  • Logfile analysis software needs to be maintained, including updates and security patches.
  • Complex page tagging vendors charge a monthly fee based on volume, i.e., the number of page views per month collected.
Which solution is cheaper to implement depends on the amount of technical expertise within the company, the vendor chosen, the amount of activity seen on the websites, the depth and type of information sought, and the number of distinct websites needing statistics.

Regardless of the vendor solution or data collection method employed, the cost of web visitor analysis and interpretation should also be included. That is, the cost of turning raw data into actionable information. This can be from the use of third party consultants, the hiring of an experienced web analyst, or the training of a suitable in-house person. A cost-benefit analysis can then be performed. For example, what revenue increase or cost savings can be gained by analyzing the web visitor data?

Hybrid methods

Some companies produce solutions that collect data through both log-files and page tagging and can analyze both kinds. By using a hybrid method, they aim to produce more accurate statistics than either method on its own. An early hybrid solution was produced in 1998 by Rufus Evison.

Geolocation of visitors

With IP geolocation, it is possible to track visitors' locations. Using an IP geolocation database or API, visitors can be geolocated to city, region, or country level.

IP Intelligence, or Internet Protocol (IP) Intelligence, is a technology that maps the Internet and categorizes IP addresses by parameters such as geographic location (country, region, state, city and postcode), connection type, Internet Service Provider (ISP), proxy information, and more. The first generation of IP Intelligence was referred to as geotargeting or geolocation technology. This information is used by businesses for online audience segmentation in applications such as online advertising, behavioral targeting, content localization (or website localization), digital rights management, personalization, online fraud detection, localized search, enhanced analytics, global traffic management, and content distribution.
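
As a sketch, MaxMind's geoip2 Python package can resolve an address against a local database. The database file must be downloaded separately from MaxMind; the file path and IP address below are placeholders, not working values.

```python
import geoip2.database  # pip install geoip2

# GeoLite2-City.mmdb is assumed to have been downloaded to the working directory.
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    # Placeholder IP from the documentation range; a real lookup needs a
    # routable visitor address, otherwise AddressNotFoundError is raised.
    response = reader.city("203.0.113.4")
    print(response.country.iso_code,
          response.subdivisions.most_specific.name,
          response.city.name)
```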

Click analytics

Figure: clickpath analysis, with referring pages on the left and arrows and rectangles differing in thickness and expanse to symbolize movement quantity.
 
Click analytics is a special type of web analytics that gives special attention to clicks.

Commonly, click analytics focuses on on-site analytics. An editor of a website uses click analytics to determine the performance of a particular site, with regard to where its users are clicking.

Also, click analytics may happen in real time or "unreal" time, depending on the type of information sought. Typically, front-page editors on high-traffic news media sites will want to monitor their pages in real time to optimize the content. Editors, designers, or other stakeholders may analyze clicks over a wider time frame to help them assess the performance of writers, design elements, or advertisements.

Data about clicks may be gathered in at least two ways. Ideally, a click is "logged" when it occurs, and this method requires some functionality that picks up relevant information when the event occurs. Alternatively, one may assume that a page view is the result of a click, and therefore log a simulated click that led to that page view, as sketched below.
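
The second method can be sketched by deriving simulated clicks from consecutive page views within a visit. This reuses the sessionized visits from the log-analysis sketch earlier; the visit structure is an assumption carried over from there.

```python
from collections import Counter

def simulated_clicks(visits):
    """Treat each consecutive pair of page views in a visit as one click,
    producing (from_page, to_page) edge counts for a clickpath diagram."""
    edges = Counter()
    for visit in visits:
        pages = visit["pages"]
        for src, dst in zip(pages, pages[1:]):
            edges[(src, dst)] += 1
    return edges
```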

Customer lifecycle analytics

Customer lifecycle analytics is a visitor-centric approach to measuring that falls under the umbrella of lifecycle marketing. Page views, clicks and other events (such as API calls, access to third-party services, etc.) are all tied to an individual visitor instead of being stored as separate data points. Customer lifecycle analytics attempts to connect all the data points into a marketing funnel that can offer insights into visitor behavior and website optimization.

Other methods

Other methods of data collection are sometimes used. Packet sniffing collects data by sniffing the network traffic passing between the web server and the outside world. Packet sniffing involves no changes to the web pages or web servers. Integrating web analytics into the web server software itself is also possible. Both these methods claim to provide better real-time data than other methods.

On-site web analytics - definitions

There are no globally agreed definitions within web analytics, as the industry bodies have been trying to agree on definitions that are useful and definitive for some time. The main bodies who have had input in this area have been the IAB (Interactive Advertising Bureau), JICWEBS (The Joint Industry Committee for Web Standards in the UK and Ireland), and the DAA (Digital Analytics Association), formerly known as the WAA (Web Analytics Association, US). However, many terms are used in consistent ways from one major analytics tool to another, so the following list, based on those conventions, can be a useful starting point (a short computational sketch follows the list):
  • Bounce Rate - The percentage of visits that are single page visits and without any other interactions (clicks) on that page. In other words, a single click in a particular session is called a bounce.
  • Click path - the chronological sequence of page views within a visit or session.
  • Hit - A request for a file from the web server. Available only in log analysis. The number of hits received by a website is frequently cited to assert its popularity, but this number is extremely misleading and dramatically overestimates popularity. A single web-page typically consists of multiple (often dozens) of discrete files, each of which is counted as a hit as the page is downloaded, so the number of hits is really an arbitrary number more reflective of the complexity of individual pages on the website than the website's actual popularity. The total number of visits or page views provides a more realistic and accurate assessment of popularity.
  • Page view - A request for a file, or sometimes an event such as a mouse click, that is defined as a page in the setup of the web analytics tool. An occurrence of the script being run in page tagging. In log analysis, a single page view may generate multiple hits as all the resources required to view the page (images, .js and .css files) are also requested from the web server.
  • Visitor / Unique Visitor / Unique User - The uniquely identified client that is generating page views or hits within a defined time period (e.g. day, week, or month). A uniquely identified client is usually a combination of a machine (one's desktop computer at work, for example) and a browser (Firefox on that machine). The identification is usually via a persistent cookie that has been placed on the computer by the site page code. An older method, used in log file analysis, is the unique combination of the computer's IP address and the User Agent (browser) information provided to the web server by the browser. It is important to understand that the "Visitor" is not the same as the human being sitting at the computer at the time of the visit, since an individual human can use different computers or, on the same computer, can use different browsers, and will be seen as a different visitor in each circumstance. Increasingly, but still somewhat rarely, visitors are uniquely identified by Flash LSOs (Local Shared Objects), which are less susceptible to privacy enforcement.
  • Visit / Session - A visit or session is defined as a series of page requests or, in the case of tags, image requests from the same uniquely identified client. A unique client is commonly identified by an IP address or a unique ID that is placed in the browser cookie. A visit is considered ended when no requests have been recorded in some number of elapsed minutes. A 30-minute limit ("time out") is used by many analytics tools but can, in some tools (such as Google Analytics), be changed to another number of minutes. Analytics data collectors and analysis tools have no reliable way of knowing if a visitor has looked at other sites between page views; a visit is considered one visit as long as the events (page views, clicks, whatever is being recorded) occur within 30 minutes of one another. Note that a visit can consist of one page view, or thousands. A unique visit's session can also be extended if the time between page loads indicates that a visitor has been viewing the pages continuously.
  • Active Time / Engagement Time - Average amount of time that visitors spend actually interacting with content on a web page, based on mouse moves, clicks, hovers and scrolls. Unlike Session Duration and Page View Duration / Time on Page, this metric can accurately measure the length of engagement in the final page view, but it is not available in many analytics tools or data collection methods.
  • Average Page Depth / Page Views per Average Session - Page Depth is the approximate "size" of an average visit, calculated by dividing total number of page views by total number of visits.
  • Average Page View Duration - Average amount of time that visitors spend on an average page of the site.
  • Click - "refers to a single instance of a user following a hyperlink from one page in a site to another".
  • Event - A discrete action or class of actions that occurs on a website. A page view is a type of event. Events also encapsulate clicks, form submissions, keypress events, and other client-side user actions.
  • Exit Rate / % Exit - A statistic applied to an individual page, not a web site. The percentage of visits seeing a page where that page is the final page viewed in the visit.
  • First Visit / First Session - (also called 'Absolute Unique Visitor' in some tools) A visit from a uniquely identified client that has theoretically not made any previous visits. Since the only way of knowing whether the uniquely identified client has been to the site before is the presence of a persistent cookie or via digital fingerprinting that had been received on a previous visit, the First Visit label is not reliable if the site's cookies have been deleted since their previous visit.
  • Frequency / Session per Unique - Frequency measures how often visitors come to a website in a given time period. It is calculated by dividing the total number of sessions (or visits) by the total number of unique visitors during a specified time period, such as a month or year. Sometimes it is used interchangeably with the term "loyalty."
  • Impression - The most common definition of "Impression" is an instance of an advertisement appearing on a viewed page. Note that an advertisement can be displayed on a viewed page below the area actually displayed on the screen, so most measures of impressions do not necessarily mean an advertisement has been viewable.
  • New Visitor - A visitor that has not made any previous visits. This definition creates a certain amount of confusion (see common confusions below), and is sometimes substituted with analysis of first visits.
  • Page Time Viewed / Page Visibility Time / Page View Duration - The time a single page (or a blog, Ad Banner...) is on the screen, measured as the calculated difference between the time of the request for that page and the time of the next recorded request. If there is no next recorded request, then the viewing time of that instance of that page is not included in reports.
  • Repeat Visitor - A visitor that has made at least one previous visit. The period between the last and current visit is called visitor recency and is measured in days.
  • Return Visitor - A Unique visitor with activity consisting of a visit to a site during a reporting period and where the Unique visitor visited the site prior to the reporting period. The individual is counted only once during the reporting period.
  • Session Duration / Visit Duration - Average amount of time that visitors spend on the site each time they visit. It is calculated as the sum total of the duration of all the sessions divided by the total number of sessions. This metric can be complicated by the fact that analytics programs cannot measure the length of the final page view.
  • Single Page Visit / Singleton - A visit in which only a single page is viewed (this is not a 'bounce').
  • Site Overlay is a report technique in which statistics (clicks) or hot spots are superimposed, by physical location, on a visual snapshot of the web page.
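
As a computational sketch, several of these definitions can be derived directly from the sessionized visits produced in the log-analysis sketch earlier; the field names ("pages", "start", "end") are assumptions carried over from that sketch.

```python
from collections import Counter

def site_metrics(visits):
    """Bounce rate, average page depth, per-page exit rate, and average
    session duration, computed from sessionized visits."""
    total = len(visits)
    page_views = Counter(p for v in visits for p in v["pages"])
    exits = Counter(v["pages"][-1] for v in visits if v["pages"])
    return {
        "bounce_rate": sum(1 for v in visits if len(v["pages"]) == 1) / total,
        "avg_page_depth": sum(len(v["pages"]) for v in visits) / total,
        "exit_rate": {p: exits[p] / page_views[p] for p in exits},
        "avg_session_duration_s": sum(
            (v["end"] - v["start"]).total_seconds() for v in visits) / total,
    }
```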

Off-site web analytics

Off-site web analytics is based on open data analysis, social media exploration, and share of voice on web properties. It is usually used to understand how to market a site by identifying the keywords tagged to this site, either from social media or from other websites.

By using the HTTP Referer header, webpage owners can trace which referring sites help bring traffic to their own site.
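
A small sketch: given logged Referer header values, count which referring domains send the most traffic. The "-" value is the conventional placeholder for an empty Referer in server logs.

```python
from collections import Counter
from urllib.parse import urlparse

def top_referrers(referer_values, n=10):
    """Count referring domains from logged HTTP Referer header values."""
    domains = Counter(urlparse(ref).netloc
                      for ref in referer_values if ref and ref != "-")
    return domains.most_common(n)

print(top_referrers(["https://example.com/blog", "https://example.com/", "-"]))
```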

Common sources of confusion in web analytics

The hotel problem

The hotel problem is generally the first problem encountered by a user of web analytics. The problem is that the unique visitors for each day in a month do not add up to the same total as the unique visitors for that month. This appears to an inexperienced user to be a problem in whatever analytics software they are using. In fact it is a simple property of the metric definitions. 

The way to picture the situation is by imagining a hotel. The hotel has two rooms (Room A and Room B).


          Day 01   Day 02   Day 03   Total
Room A    John     John     Mark     2 unique users
Room B    Mark     Jane     Jane     2 unique users
Total     2        2        2        ?

As the table shows, the hotel has two unique users each day over three days. The sum of the totals with respect to the days is therefore six. 

During the period each room has had two unique users. The sum of the totals with respect to the rooms is therefore four. 

Actually only three visitors have been in the hotel over this period. The problem is that a person who stays in a room for two nights will get counted twice if you count them once on each day, but is only counted once if you are looking at the total for the period. Any software for web analytics will sum these correctly for the chosen time period, thus leading to the problem when a user tries to compare the totals.
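
The arithmetic is easy to verify with sets, which is exactly how a unique-visitor count behaves:

```python
days = {
    "Day 01": {"John", "Mark"},
    "Day 02": {"John", "Jane"},
    "Day 03": {"Mark", "Jane"},
}
daily_sum = sum(len(guests) for guests in days.values())   # 2 + 2 + 2 = 6
period_uniques = len(set.union(*days.values()))            # {John, Mark, Jane} -> 3
print(daily_sum, period_uniques)   # the two totals legitimately disagree
```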

Web analytics methods

Problems with cookies

Historically, vendors of page-tagging analytics solutions have used third-party cookies sent from the vendor's domain instead of the domain of the website being browsed. Third-party cookies can handle visitors who cross multiple unrelated domains within the company's site, since the cookie is always handled by the vendor's servers. 

However, third-party cookies in principle allow tracking of an individual user across the sites of different companies, letting the analytics vendor collate the user's activity on sites where they provided personal information with their activity on other sites where they believed they were anonymous. Although web analytics companies deny doing this, other companies, such as those supplying banner ads, have done so. Privacy concerns about cookies have therefore led a noticeable minority of users to block or delete third-party cookies. In 2005, some reports showed that about 28% of Internet users blocked third-party cookies and 22% deleted them at least once a month. Most vendors of page-tagging solutions have now moved to provide at least the option of using first-party cookies (cookies assigned from the client subdomain).

Another problem is cookie deletion. When web analytics depend on cookies to identify unique visitors, the statistics depend on a persistent cookie holding a unique visitor ID. When users delete cookies, they usually delete both first- and third-party cookies. If this is done between interactions with the site, the user will appear as a first-time visitor at their next interaction point. Without a persistent and unique visitor ID, conversions, click-stream analysis, and other metrics dependent on the activities of a unique visitor over time cannot be accurate.
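To make the dependency concrete, here is a minimal sketch, not modeled on any specific vendor, of assigning a persistent first-party visitor ID via a cookie; the cookie name and lifetime are illustrative assumptions. If the user deletes the cookie between visits, the second branch issues a fresh ID and the visitor is counted as new again.

import uuid
from http.cookies import SimpleCookie

def get_or_assign_visitor_id(cookie_header):
    """Return (visitor_id, set_cookie_header_or_None) for a request."""
    cookies = SimpleCookie(cookie_header or "")
    if "visitor_id" in cookies:
        return cookies["visitor_id"].value, None  # recognized returning visitor
    # No cookie (first visit, or the user deleted it): mint a new identity.
    new_id = str(uuid.uuid4())
    header = f"visitor_id={new_id}; Max-Age=63072000; Path=/; SameSite=Lax"
    return new_id, header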

Cookies are used because IP addresses are not always unique to users and may be shared by large groups or proxies. In some cases, the IP address is combined with the user agent in order to more accurately identify a visitor if cookies are not available. However, this only partially solves the problem because often users behind a proxy server have the same user agent. Other methods of uniquely identifying a user are technically challenging and would limit the trackable audience or would be considered suspicious. Cookies are the selected option because they reach the lowest common denominator without using technologies regarded as spyware.
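A sketch of that fallback, with all names assumed for illustration: hashing the IP address together with the User-Agent string approximates a visitor key when no cookie is available, but, as noted above, visitors behind the same proxy with the same browser still collide.

import hashlib

def fallback_visitor_key(ip_address, user_agent):
    """Approximate visitor key for cookie-less requests (collisions likely)."""
    raw = f"{ip_address}|{user_agent}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

print(fallback_visitor_key("203.0.113.7", "Mozilla/5.0"))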

Secure analytics (metering) methods

Note that third-party information gathering is subject to whatever network limitations and security are applied: countries, service providers, and private networks can prevent site-visit data from reaching third parties. All the methods described above (and some other methods not mentioned here, like sampling) share the central problem of being vulnerable to manipulation, both inflation and deflation. This means these methods are imprecise and insecure in any reasonable model of security. This issue has been addressed in a number of papers, but to date the solutions suggested in those papers remain theoretical, possibly due to a lack of interest from the engineering community, or because of the financial gain the current situation provides to the owners of big websites. For more details, consult the aforementioned papers.

Cyberpsychology

From Wikipedia, the free encyclopedia

Cyberpsychology (also known as Internet psychology or web psychology) is a developing field that encompasses all psychological phenomena associated with or affected by emerging technology. Cyber comes from cyberspace, a term rooted in cybernetics, the study of control and communication; psychology is the study of the mind and behavior.

Overview

Cyberpsychology is the study of the human mind and behavior and how the culture of technology, specifically virtual reality and social media, affects them. Mainstream research studies focus on the effect of the Internet and cyberspace on the psychology of individuals and groups. Some hot topics include online identity, online relationships, personality types in cyberspace, transference to computers, addiction to computers and the Internet, regressive behavior in cyberspace, and online gender-switching. Media psychology is an emerging specialty, and the Society for Media Psychology and Technology of the American Psychological Association (APA Division 46) includes many cyberpsychologists among its members.

While statistical and theoretical research in this field is based around Internet usage, cyberpsychology also includes the study of the psychological ramifications of cyborgs, artificial intelligence, and virtual reality. Although some of these topics may appear to be the stuff of science fiction, they are quickly becoming science fact, as evidenced by interdisciplinary approaches in the fields of biology, engineering, and mathematics. The field of cyberpsychology remains open to refinement, including inquiry into the nature of current and future trends in mental illness associated with technological advances.

It was around the turn of the millennium that the United States broke the 50 percent mark in Internet use, personal computer use, and cell phone use. With such broad exposure to computers and their displays, our perceptions go beyond objects and images in our natural environment to include graphics and images on the computer screen. As the overlaps between man and machine expand, the relevance of human–computer interaction (HCI) research within the field of cyberpsychology will become more visible and more necessary to understanding the modern lifestyles of many people. With the rising number of Internet and computer users around the world, computer technology's effects on the human psyche will continue to significantly shape both our interactions with each other and our perceptions of a world literally "at our fingertips".

Social media and cyberpsychological behavior

Social media use is growing rapidly, and its psychological impact is precisely what cyberpsychology seeks to understand.
 
Although cyberpsychology includes other technological platforms such as cybertherapy and the ramifications of virtual reality, the following section is focused on the effect of social media on human behavior, as it is the most prevalent platform for technology use. 

Facebook, the leading online social media platform globally, affects users' psychological status in multiple ways. Facebook follows a one-to-many communication pattern that allows users to share information about their lives, including social activities and photographs. This was extended in 2012, when Facebook Messenger merged with the Facebook Chat feature to give users more one-on-one communication. While Facebook users enjoy the sense of being connected, frequent use of Facebook may threaten users' mental health. Comparison, low self-esteem, depression, loneliness, and negative relationships are all possible detrimental consequences associated with frequent use of Facebook or other social media platforms.

Comparison and low self-esteem

Due to the nature of Facebook, Instagram, Twitter, etc., social media users often compare their friends' lives with their own. This can be deceptive when a user sees only the joyous or entertaining experiences in a friend's life and compares them to his or her own lesser experiences. In a study published in the Personality and Social Psychology Bulletin, Alexander Jordan and his colleagues at Stanford University asked 80 freshmen to report whether they or their peers had recently experienced various negative or positive emotional events. Consistently, participants underestimated how many negative experiences ("had a distressing fight", "felt sad because they missed people", etc.) their peers were having while overestimating how much fun ("going out with friends", "attending parties", etc.) these same peers experienced. A similar study conducted at Stanford University showed that underestimating peers' negative experiences correlated with greater loneliness and lower overall life satisfaction. By inviting constant comparison, which can lower self-esteem and feelings of self-worth, Facebook and other social media accounts appear to exploit an Achilles' heel of human nature.

Depression

Decreased self-esteem can in turn increase depression. Facebook specifically has been criticized for causing depression, especially among teenage users. A University of Michigan study of 82 Facebook users over a two-week period concluded that frequent Facebook use invoked feelings of depression and inadequacy. Social psychologist Ethan Kross, the lead author of the study, stated that the research tracked, on a moment-to-moment basis throughout the day, how a person's mood fluctuated during time spent on Facebook and whether or not they modified their Facebook usage. Results suggest that as participants spent more time on Facebook, their feelings of well-being decreased and feelings of depression increased. Similarly, a University of Pittsburgh study of 1,787 participants between 19 and 32 years of age showed that participants in the highest quartile for social media site visits per week were at an increased likelihood of experiencing depression.

Social isolation and ostracism

According to Maslow's hierarchy of needs, social interaction and belonging are important aspects of psychological and emotional well-being. Although it is relatively common to have hundreds of friends on Facebook, it is unlikely that any one individual has that many solid person-to-person relationships, and this can create social disconnect. Unlike meeting friends face to face, chatting with an acquaintance or a total stranger online can increase feelings of loneliness instead of feelings of social connection. This may be because interaction through Facebook's "like" and "comment" buttons is too brief and does not show lasting concern. In the 2016 University of Pittsburgh study mentioned previously, researchers found that excessive social media usage increased feelings of social isolation as authentic social interactions were replaced by virtual relationships. Additionally, a 2011 study conducted at University College London examined the fMRI brain scans of 125 frequent Facebook users and found that the size of an individual's online social network is closely linked to brain structures associated with social cognition. This research provides evidence that social media platforms such as Facebook are changing the way people socialize, and that they may not be fulfilling social needs.

Additionally, 2012 research data from Purdue University indicates that social rejection or ostracism in an immersive virtual environment threatens four fundamental needs (belonging, control, self-esteem, and meaningful existence) and thus has a negative impact on affect (emotion). This research suggests that individuals who use virtual environments (e.g., MMORPGs, massively multiplayer online role-playing games) may have everyday experiences with ostracism in these environments. The study presents the first known evidence of ostracism in virtual environments and found its effects to be powerful, with effect sizes medium to large in magnitude.

Negative relationships

Facebook has also been linked to increased divorce and break-up rates. Couples that fit this trend tend to express feelings of jealousy when their partner comments on the wall of a person of the opposite sex. To cope with the uncertainty of a suspected romantic relationship, partner surveillance on Facebook is becoming more popular; however, the resulting skepticism between partners may itself end the relationship. Russell B. Clayton, Alexander Nagurney, and Jessica R. Smith surveyed 205 Facebook users aged 18–82 to determine whether frequent Facebook use predicted negative relationship outcomes, and also examined length of relationship as a moderator variable in the model. The results indicate that a high level of Facebook usage is associated with negative relationship outcomes, and that these relationships are mediated by Facebook-related conflict. This series of relationships holds only for those who are, or have been, in relatively new relationships of three years or less. The study adds to the growing body of literature investigating Internet use and relationship outcomes, and may be a precursor to further research investigating whether Facebook use contributes to the divorce rate, emotional cheating, and physical cheating.

It is important to note that these findings do not demonstrate causality. A similar study demonstrated that relationship maintenance behaviors, such as surveillance and monitoring, were indicators of current levels of trust within the relationship. This suggests that certain behaviors on social media may be predicting these negative relationships, rather than causing them. Further, the study also showed that Facebook can be a tool in strengthening and reaffirming a relationship, as it allows for positive expressions of trust, affection and commitment.

Fear of missing out (FOMO)

A byproduct of social media addiction is the "fear of missing out", or FOMO. This fear develops from a user's repetitive and obsessive checking of "friend" status updates and posts related to social events or celebrations, resulting in a feeling of being "left out" if these events are not experienced. There is also the closely related fear of being missed (FOBM), or the fear of invisibility: an obsessive need to provide constant status updates on one's own personal day-to-day life, movements, travel, events, etc., leaving the user unable to "un-plug". There is evidence suggesting this type of anxiety is a mediating factor in both increased social media use and decreased self-esteem.

Sleep deprivation

Research suggests that social networking can lead to sleep deprivation. A study commissioned by Travelodge hotels in the United Kingdom surveyed 6,000 adults to explore the nation's bedtime habits; key findings suggested Britain has become a nation of "online-a-holics". On average, Britons spend 16 minutes in bed each night socially networking with pals, with the peak chatting time being 9:45 pm. This time spent social networking is cutting into Britons' sleep quota: on average, respondents reported getting just six hours and 21 minutes of sleep per night, one hour and 39 minutes below the recommended quota of eight hours. Further findings revealed that 65% of respondents stated the very last thing they do before nodding off at night is check their mobile phone for text messages. On average, Britons spend around nine minutes every night texting before falling asleep, and four out of ten adults reported regular text communication with friends in bed every night.

Addictive behavior

Recent studies have shown a connection between use of online social media such as Facebook and addictive behaviors, poor emotion regulation, poor impulse control, and substance abuse. This may be because people are learning to access and process information more rapidly and to shift attention quickly from one task to the next. All this access and vast selection is causing some entertainment seekers to develop a constant need for instant gratification and a loss of patience. Results from a survey of university undergraduates showed that almost 10% met criteria for what investigators describe as "disordered social networking use". Respondents who met criteria for "Facebook addiction" also reported statistically significant symptoms commonly linked to addiction, such as tolerance (increased Facebook use over time), withdrawal (irritability when unable to access Facebook), and cravings to access the site. "Our findings suggest that there may be shared mechanisms underlying both substance and behavioral addictions," the study's author, Hormes, added.

The results of a 2014 study in the journal Cyberpsychology, Behavior, and Social Networking provided evidence that the prevalence of Internet addiction varies considerably between countries and is inversely related to quality of life.

Eating disorders

A study conducted by the University of Haifa in 2011 showed that the more time teenage girls spend on Facebook, the higher their risk of developing negative body images and eating disorders. A more recent study by researchers at Florida State University found a correlation between Facebook use and disordered eating.

The researchers examined the relationship between college women's media use and two sets of variables (disordered-eating symptomatology and a set of related variables, including body dissatisfaction and drive for thinness) and assessed the relationship between college men's media use and their endorsement of thinness for themselves and for women. They expected to find consumption of thinness-depicting and thinness-promoting (TDP) media related to disordered eating and thinness endorsement, with the social learning process of modeling accounting for the relationships. For women, media use predicted disordered-eating symptomatology, drive for thinness, body dissatisfaction, and ineffectiveness. For men, media use predicted endorsement of personal thinness and dieting and select attitudes in favor of thinness and dieting for women.

Social media and ADHD

In the view of Dr. Robert Melillo, a chiropractic neurologist and founder of the Brain Balance Program, the environment strongly affects the development of ADHD. Although many factors contribute to ADHD (including genes, teratogens, parenting styles, etc.) a sedentary lifestyle centered on television, computer games, and mobile devices may increase the risk for ADHD. Specifically, "When kids play computer games, their minds are processing information in a much different way than kids who are, say, running around on a playground...Recent studies have shown that playing computer games only builds very short-term attention that needs to be rewarded frequently." 

Clinical psychologist Michelle Frank, who specializes in the diagnosis and treatment of ADHD in college students, young adults, and women, stated, "The ADHD brain is already one that struggles with motivation, activation, organizing behaviors, managing time, and maintaining focus...Technology, left unmanaged, makes these struggles considerably more difficult. The unique challenges that result are prime vulnerabilities to the common pitfalls of technology use." Frank explained that an individual with ADHD has structural, functional, and chemical differences compared to a neurotypical brain. 

These differences explain why individuals with ADHD may be more prone to engage in risky or unhelpful behaviors online and struggle to control spontaneous impulses without thinking of future consequences. The ADHD brain is primed to seek out more stimulation than neurotypical brains, and technology is a ripe source of engagement. For these reasons, an emerging body of research suggests that internet addiction and unhealthy social media activity may be more prevalent in individuals with ADHD. Another compounding piece of the social media puzzle is time management. Individuals with ADHD have trouble with awareness of time, procrastination, avoidant behaviors, and staying on task. Frank explains that individuals with ADHD often misperceive time and have trouble thinking into the future; "now" is the dominant time zone, so time management is a challenge.

In addition, Ju-Yu Yen at Kaohsiung Medical University Hospital in Taiwan discovered that being easily bored, rather than easily distracted, is the core symptom of inattentive ADHD. Internet activities offer high interactivity and immediate response, and these quick actions relieve the feeling of boredom; in other words, the internet becomes a coping mechanism for those who cannot focus. The research concluded that male college students are more likely to screen positive for adult ADHD; however, the overall association between Internet addiction and attention deficit is more significant in females.

Positive correlates of social media use

Research conducted by Australian researchers demonstrated that a number of positive psychological outcomes are related to Facebook use. These researchers established that people can derive a sense of social connectedness and belongingness in the online environment. Importantly, this online social connectedness was associated with lower levels of depression and anxiety, and greater levels of subjective well-being. These findings suggest that the nature of online social networking determines the outcomes of online social network use.

Social media and memes

A significant component of the social media experience is internet memes. As the internet acquired its own variety of memes and language, an intellectual convergence became apparent among internet users. Digital inhabitants have voluntarily created various requirements and standards that must be met for a successful interaction. The distinguishing judgment of others is implied in the sharing of memes, and this judgment leads to differences in social existence. The phenomenon of information infection through internet memes can influence the ways internet users acquire and interpret data; this in turn affects their participation, interactions, and behaviors online and offline.

While internet memes appear to be simple pop culture references, when observed more closely they offer a glimpse into the formation of culture and language. These snippets of pop culture demonstrate how the collective mind of internet users relates through seemingly ridiculous images and text. Despite the absurdity of some memes, they allow connections to be built through a shared experience. This shared experience is central to the development of the culture of the modern internet and of those who primarily connect with others through it, and it also shapes the culture of future generations as they become more enmeshed within this globalized culture and psyche.

Psychotherapy in cyberspace

Psychotherapy in cyberspace, also known as cybertherapy or e-therapy, is a controversial practice with a history of doubts about its efficiency, validity, and effectiveness. The first instance of this practice did not involve interaction with a human but rather a program called ELIZA, designed by Joseph Weizenbaum to answer questions and concerns with basic Rogerian responses. ELIZA proved so effective that many people either mistook the program for a human or became emotionally attached to it.
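For flavor, here is a toy Python sketch in the spirit of ELIZA's Rogerian pattern matching. The rules are illustrative inventions, not Weizenbaum's original script, and the real ELIZA also swapped pronouns ("my" to "your"), which this sketch omits.

import re

RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text):
    # Return the first matching reflection, else a default Rogerian nudge.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza_reply("I feel anxious about work"))  # How long have you felt anxious about work?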

In today's most common computer-mediated form of counseling, a person e-mails or chats online with a therapist (online counseling). E-therapy may be particularly effective when conducted via video conferencing, as important cues such as facial expression and body language can be conveyed, albeit in a less present way. At the same time, there are new applications of technology within psychology and healthcare which utilize augmented- and virtual-reality components, for example in pain management treatment, PTSD treatment, the use of avatars in virtual environments, and self- and clinician-guided computerized cognitive behavior therapies. The voluminous work of Azy Barak (University of Haifa) and a growing number of researchers in the US and UK gives strong evidence for the efficacy (and sometimes superiority) of Internet-facilitated, computer-assisted treatments relative to 'traditional' in-office-only approaches. The UK's National Health Service now recognizes CCBT (computerized cognitive behavioral therapy) as the preferred method of treatment for mild-to-moderate presentations of anxiety and depression. Applications in psychology and medicine also include such innovations as the "Virtual Patient" and other virtual/augmented reality programs which can provide trainees with simulated intake sessions while also providing a means of supplementing clinical supervision.

Many of the current controversies related to e-therapy have arisen in the context of ethical guidelines and considerations. In the U.S., special circumstances complicate widespread online services among licensed health and mental health professionals: each of the 50 states has its own licensing and regulatory system, and for most professions practitioners are limited to practicing within their state, with the recipient's location determining where the service is received. This has spurred ongoing debate about restricted access and the antiquated licensing system. Meanwhile, applications and research expand at a rapid rate, and areas of research, practice, and education within the world of psychotherapy continue to grow, especially as research and experience demonstrate the value of technology- and Internet-assisted applications.

In popular culture

  • Lisa Kudrow's Web-based situation comedy Web Therapy, in which Kudrow's unaccredited and unscrupulous character Fiona Wallice conducts therapy sessions using iChat, explores many of the ethical and practical issues raised by the prospect of psychotherapy conducted via Internet video chat.
  • Patricia Arquette recurs as FBI Special Agent in Charge Avery Ryan, a cyberpsychologist, in CSI: Crime Scene Investigation. She also headlines the spinoff series CSI: Cyber in the same role.
  • Forensic anthropologist Dr. Temperance Brennan and Special Agent Seeley Booth in Fox Network's hit television series, Bones, practice cyberpsychology by collecting information from suspects' social media accounts to analyze personality, communications, and possible motives to help apprehend the criminal.
  • Sketch comedy group Studio C pokes fun at different online personalities created by social media and how social media posting impacts dating relationships in sketches entitled "Facebook Friends Song" and "Don't Change Your Facebook Status".

Lie point symmetry

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lie_point_symmetry     ...