For more details about the workshop, please visit the workshop website at

Session I - Scope and Vision    Monday May 23 1:00 PM - 2:25 PM
Session Chair: George Percivall, OGC



Expanding GeoWeb to an Internet of Things

George Percivall, Nadine Alameh

Connecting our world with accessible networks is scaling to trillions of everyday objects. The Internet of Things, Pervasive Computing, and the Sensor Web are research names for this development; Planetary Skin, Smarter Planet, and CeNSE are corporate names. The Internet will be augmented with mobile machine-to-machine communications and ad-hoc local network technologies. At the network nodes, information about objects will come from barcodes, RFIDs, and sensors. The location of all objects will be known. This workshop seeks to explore the role of location in expanding GeoWeb to an Internet of Things.
The workshop seeks presentations on functions enabled by geographic location and by location relative to surrounding objects. Most of the objects will be indoors, in a 3D setting. The workshop also seeks presentations on relevant technologies such as location determination, geocoding, schemas for points of interest, ad-hoc network formation based on location, processing of information about objects to detect phenomena of interest, and location-based services. Technology standards will be important for interoperability at this scale, e.g., OpenLS, CityGML, and the Sensor Web Enablement standards from the OGC.


Planetary Skin Institute

Planetary Skin Institute ALERTS - Automated Land Change Evaluation, Reporting and Tracking System

J.D. Stanley (Chief Technology Officer)

In December 2010 the Planetary Skin Institute announced the beta release of ALERTS - the Automated Land Change Evaluation, Reporting and Tracking System.
ALERTS is a decision-support system for near real-time global land use, land cover change, and disturbance detection and analysis. It provides global coverage of deforestation and other land change events and offers users a number of useful tools for identifying, characterizing and responding to disturbances.
This public beta release of ALERTS was a direct result of the Planetary Skin Institute's community swarming efforts with NASA, INPE, MINAM, Cisco, the University of Minnesota, and the Terrestrial Carbon Group. The team spent 12 months designing an immersive decision-support environment to advance the Planetary Skin Institute's mission of pioneering emerging R&D initiatives across sectors and disciplines for monitoring and managing scarce resources.
Further, by incorporating over 200 layers that span spatial and temporal land-related themes, ALERTS empowers users to go beyond disturbance detection to assess and analyze projected transitional-risk scenarios.

Keywords: Land change detection, planetary skin, carbon stocks, transitional risk


George Washington University

Physical World as an Internet of Things

Prof. Dr. Simon Berkovich

A concept of the physical Universe that does not address the difference in behavior between dead and living matter is not just incomplete; it simply cannot be correct. We have developed a cellular automaton model of the Universe in which the appearing material configurations share information control under a global content-addressable holographic memory. As a result, biological information processing is organized as Cloud Computing [1]. With the rise of the Internet there is no doubt that such an organization is much more efficient; other control arrangements for material things may simply not be workable.
The Internet construction of the physical world is a sort of realization of quantum computing. The viability of this construction is most dramatically revealed by the phenomenon of quantum nonlocality - the instantaneous, non-signaling correlation of distant events. Nonlocality is intrinsic to the sliced processing of holography, which brings in instantaneous interactions through common memory rather than gradual signaling through message passing. Traditional thinking cannot accommodate nonlocality within the paradigm of the physical world. At the moment, the given construction presents the only available operational explanation of this inconceivable phenomenon.

The holographic Internet milieu sets up different control patterns for molecular structures depending on their size. Small particles get immediate holographic feedback from the returning beam, establishing an interactive holography environment for quantum mechanical behavior [2]. The feedbacks for macromolecules ("aperiodic crystals" [3]) are richer, as their highly developed conformational oscillations furnish access keys to the holographic storage; so, in contrast to small particles, the behavior of macromolecules is additionally governed by signals from the bulk of the holographic memory. The drastic distinctions in the behavior of dead and living objects are due to the different feedbacks for small and large molecules produced by the Internet infrastructure of the material world.

Functioning of complex systems ordinarily requires inflows of two types of entities: information signals and actuation impetuses. The latter aspect, in relation to the motility of macromolecules, has been considered in [4]. According to [3], the purpose of feeding is not the acquisition of energy but the intake of "negative entropy". The essential point of metabolism is freeing the organism from all the entropy it cannot help producing while alive. The primary hypothesis about the acquisition of energy by living organisms is that the internal burning of sugar in one way or another provides the motive power to the muscle. Yet the amount of energy obtained from food does not seem sufficient for the work organisms perform; for example, some beetles would need a daily intake of food twice their own mass. Furthermore, it is not known exactly how the energy-providing reactions are coupled to mechanical precision, or how the control signals arriving at macromolecules are transformed into purposeful actions [5].

The total amount of power required by all the living organisms on Earth is commensurate (within some orders of magnitude) with the total amount of power used by modern human civilization. In corresponding terms, it can be said that living organisms consume the ultimate source of energy - solar radiation - in the form of "biomass". This common view is confronted here by considering a new source of energy for biochemical motions, relating it to the external clock of the physical Internet. This kind of energy can be extracted from the pushing pulses of this clocking mechanism, the so-called "hot-clocking" effect [6], and concentrated by the mode of parametric resonance [4]. This kind of surmised powering of biochemical activities effectively intermingles information and energy processes. Figuratively speaking, the proposed machinery can be seen as USB-port functionality incorporated in the quantum computer of the Universe.

For the Internet of the physical world the considered clocking mechanism introduces an unexpected triggering condition at its working frequency of 10^11 Hz. Actually, it has been noticed that electromagnetic waves in the corresponding millimeter range produce various harmless, but otherwise unexplainable, biological effects that cannot be understood either in terms of heating or through direct action of electric fields; "it follows that the electromagnetic wave acts as a trigger to events for which the biological system is already prepared" [7]. Since biological objects operate under a 10^11 Hz clock cycle, they might be affected by a novel environmental factor - gigahertz radiation from the vast spread of cellular phones. Conventional physics does not foresee how this radiation can influence biological objects, while massive epidemiological studies would take decades [8]. In the meantime, it is important to keep in mind that HF electromagnetic radiation could interfere with biological processes as long as they are driven by the 10^11 Hz clock of Cloud Computing.

Keywords: Cyber-physics, Internet of Things, Quantum Computing, Cloud Computing, Bioinformatics



Future Work on the Ushahidi Platform to Use QR Codes to Tag Buildings and Places with Application to Crisis Scenarios

Jon Gosier


Skyhook Wireless

What to Do with 500M Location Requests a Day?

Kipp Jones, Richard Sutton

Skyhook Wireless provides hybrid positioning to millions of mobile devices around the world. Using an approach that integrates cell, WiFi, and GPS signals, the system services over 500 million location requests daily. This results in a massive and perpetually growing artifact of device locations anchored in time and place. Using this time-stamped location data, we are able to measure aggregated mobile device activity with extreme local accuracy, to any required resolution, across thousands of cities worldwide.
Providing location services to such a large population of devices allows Skyhook to continuously improve positioning quality by reconciling signal maps returned from adjacent requests. It also provides an unparalleled tool for quantifying social behavior in space and time. We describe one analytical output of these data, SpotRank, which presents a normalized week of discrete, measured hours across the entire global Skyhook service area.
SpotRank provides a method to compare and analyze locations aggregated to 0.001-decimal-degree tiles (approximately 1 hectare at mid latitudes) in 1-hour increments. The SpotRank canonical week provides an averaged measure of activity for each tile-hour: 168 hours across more than 10 million tiles. This architecture permits many creative comparisons, such as how typical activity levels vary between Monday at 9 AM and Friday at 9 AM for any tile in our coverage area. These normalized data may also be compared across tiles in disparate cities or countries. With these data as the baseline, many predictive and anomalous-behavior analyses are possible, using the SpotRank metric standalone or in concert with local data sources.
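As a rough illustration of the aggregation described above, the sketch below buckets time-stamped location requests into 0.001-degree tiles and the 168 hour-of-week slots of the canonical week. Function names and sample data are hypothetical; the actual Skyhook pipeline is not public.

```python
from datetime import datetime, timezone

TILE_DEG = 0.001  # SpotRank aggregates to 0.001-decimal-degree tiles


def tile_id(lat, lon):
    """Quantize a coordinate to its 0.001-degree tile (indexed by southwest corner)."""
    return (int(lat // TILE_DEG), int(lon // TILE_DEG))


def hour_of_week(ts):
    """Bucket a timestamp into one of 168 canonical week hours
    (0 = Monday 00:00 ... 167 = Sunday 23:00)."""
    return ts.weekday() * 24 + ts.hour


# Aggregate hypothetical location requests into tile-hour counts.
counts = {}
for lat, lon, ts in [
    (42.3605, -71.0585, datetime(2011, 5, 23, 9, 15, tzinfo=timezone.utc)),  # Mon 9AM
    (42.3605, -71.0585, datetime(2011, 5, 27, 9, 40, tzinfo=timezone.utc)),  # Fri 9AM
]:
    key = (tile_id(lat, lon), hour_of_week(ts))
    counts[key] = counts.get(key, 0) + 1
```

Averaging such tile-hour counts over many weeks would yield the normalized canonical-week measure the abstract describes, allowing a Monday-9AM tile-hour to be compared directly with a Friday-9AM one.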

Keywords: Hybrid positioning system, SpotRank, Skyhook, mobile activity

Session I - Scope and Vision (Cont.)
Session 2 - Enabling Technology
Session 2a - Object location, identity and function   Monday May 23 2:40 PM - 4:30 PM
Session Chair: Richard Barnes, BBN



National Broadband Map to Facilitate IoT/M2M Deployment

Michael Byrne



Overview/Survey Presentation

Richard Barnes


Quova Inc.

Geo-Locating Things on the Internet

Miten Sampat (VP of Product Strategy)

With the ever-increasing dominance of the Internet as the conduit for social, commercial, governance, and research activity, determining the physical location of endpoints is critical. Physical location is an essential piece of context that informs decision systems of numerous geo-derived dimensions. For example, the location of a user performing an e-commerce transaction enables the merchant to calculate applicable federal and state taxes. From casual applications such as content localization to critical applications such as E-911 and cyber defense, geolocating endpoints is vital for the design, delivery, and optimization of services on the Internet.
However, the Internet infrastructure was developed in an organic fashion, without a master plan or design that foresaw the importance of geography. Every transaction conducted over the Internet requires a source and a destination IP address, which makes the IP address a pervasive classifier. IP addresses are to the Internet what street addresses or postal codes are to the real world, and can therefore serve as the basis of a coordinate system for the neo-geography of the Internet. Search engines, content delivery networks, content providers, e-commerce intermediaries, advertising networks, fraud prevention systems, and web analytics providers rely on IP-geolocation solutions to enhance their services today. Through this tech talk, the author will provide an overview of the technical methods that form the basis of geolocating things using IP addresses, and outline the pros and cons of the state of the art. The talk will also outline emerging technologies that point the way forward to enhancing the precision and accuracy of current methods.
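At its simplest, IP geolocation is a longest-prefix lookup: find the most specific known network block containing an address and return the location assigned to that block. The sketch below illustrates only that idea; the block-to-location table is hypothetical, and real providers such as Quova derive theirs from registry data, routing topology, and measurement.

```python
import ipaddress

# Hypothetical mapping from network blocks (both in documentation address
# space) to (country, region, city); illustrative only.
BLOCKS = {
    ipaddress.ip_network("203.0.113.0/24"): ("US", "CA", "San Francisco"),
    ipaddress.ip_network("198.51.100.0/22"): ("GB", "ENG", "London"),
}


def geolocate(ip_str):
    """Return the location of the most specific (longest-prefix) block
    containing the address, or None if no block matches."""
    ip = ipaddress.ip_address(ip_str)
    matches = [net for net in BLOCKS if ip in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return BLOCKS[best]
```

A linear scan is fine for a toy table; production systems use radix tries or sorted interval indexes over millions of blocks to answer such lookups at line rate.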

Miten Sampat is currently VP of Product Strategy at Quova, enabling some of the largest web companies to understand "where" their users come from. Before Quova, Miten was Chief Architect & CTO at Feeva, where he developed technology to enable ISPs to provide metadata for targeted online advertising in a privacy-friendly manner. Prior to Feeva, Miten co-founded and led the SeeVT project at the Center for HCI at Virginia Tech, which conducted R&D on handheld location-based systems and developed one of the early implementations of WiFi location sensing. Miten also worked at Reliance Communications in India to design, develop, and introduce value-added local services to mobile consumers in India. He has a BS & MS in Computer Science from Virginia Tech, where he was awarded Outstanding Graduate Student of the Year.

Keywords: IP geo-location, location-based

Session 2 - Enabling Technology
Session 2b - Spatial models: indoor    Tuesday May 24 1:00 PM - 2:20 PM
Session Chair: Steve Smyth, MobileGIS


MobileGIS Ltd

Site and Building Directories and Navigation

Carl Stephen Smyth (Director)

Many of us have an intuition that we should be able to extend the technical and commercial success of road navigation in large geographic spaces to smaller spaces such as parks, shopping malls, business estates, airports, train stations, crime scenes, disaster sites, and individual buildings. Designing real applications leads to three key technical and business questions:
* "What are the requirements?" There are clear differences in comparison with road navigation. Smaller spaces have a human-scale level structure embedded in 3-dimensional space. Visualisation and analysis can be as important as navigation itself. Does the turn-by-turn guidance model still make sense?
* "Where do models come from?" Smaller spaces have structures that are complex and can change frequently. Multiple sources inevitably have semantic and quantitative inconsistencies. How do you find content? How do you integrate content? How do you update content?
* "How do you locate a mobile device in smaller spaces?" Compared to road navigation, the precision requirements are tighter, the difficulties in radio propagation are extreme, and level information within structures is essential.
Consideration of these questions in a practical application at the Italian fire training centre at Montelibretti provides some answers.

Keywords: Indoor navigation, building directories, indoor location



Navigation-to-Thing and Highly-Context-Focused 'Around Me' Use Cases

Paul Bouzide

The models for representing, maintaining and using "navigable" geographic features are evolving from a 2D centerline roadway model, through a highly detailed 3D pedestrian, indoor and multimodal model, and into an Internet of (Locatable) Things. As this evolution proceeds, the volume of data that can be processed and delivered to end-user applications could reach an untenable torrent, from both a human-cognition and a machine-resource perspective.

The key as always is information, not just data. Contextualized interpretation, not just a collection of undifferentiated ground truth facts. What's needed at the edges of the GeoWeb - particularly for relatively network and processing challenged mobile devices - is the notion of "byte-sized" (pun intended) content that's "right-sized" for each individual actor based on highly dynamic personal or organizational usage contexts.

It's clear that edge applications will continue to play a role in providing such a contextual filter. Less obvious is how other GeoWeb participants will also provide contextual value. The application developer interface to a geodata provider is a pathway for application development time, product creation time and run time information exchange. This exchange will inform the processes and business rules that a data provider uses to prioritize the gathering, processing and correlation of observations, the mediation of geodata product quality level guarantees, and the delivery models for the application-ready features themselves. The effectiveness of this pathway will depend on low processing latencies, not only between observation detection and feature change availability, but also between an end user's context and what features are provided at what levels of detail.

There is ample precedent in the current vehicle navigation ecosystem for leveraging this pathway to make the resulting user experience compelling and economically viable. Moving to an integrated 3D model of the built and natural world as a framework for an Internet of Things will require enriching and formalizing this interface in order to build contextual value into the GeoWeb.

Keywords: Geoweb; navigation; context; latency



Building Information Modeling

Geoff Zeiss (Director, Utility Industry Program)

Using digital design models has been a common practice in the manufacturing industry for decades. Project teams at companies such as Boeing and Toyota have placed digital models at the core of their collaborative, concurrent engineering processes. The same approach, called building information modeling (BIM), is increasingly being adopted by architecture, engineering, and construction (AEC) service providers for building and infrastructure projects. Unlike CAD, which uses software tools to generate digital 2D and/or 3D drawings, BIM facilitates a new way of working: creating designs with intelligent objects that enable cross-functional project teams in the building and infrastructure industries to collaborate in a way that gives all stakeholders a clearer vision of the project. Models created using BIM software are intelligent because of the relationships and information that are automatically built into the model; components within the model know how to act and interact with one another. BIM not only enables engineers, architects, and construction firms to work more efficiently, but also creates a foundation for sustainable design, enabling designers to optimize the environmental footprint of a structure during the design phase. Convergence is breaking down the barriers between technical disciplines. The integration of BIM, geospatial, physical modeling and 3D visualization provides a framework of interoperability that enables an intelligent synthetic model of entire urban environments.

Geoff Zeiss has more than 20 years experience in the geospatial software industry and 15 years experience developing enterprise geospatial solutions for utilities, communications, and government. His interests include streamlining infrastructure management workflow, open source geospatial, and converged BIM/CAD/GIS/3D simulation solutions. Geoff was Director of Product Development at MCI VISION* Solutions which pioneered RDBMS-based spatial data management, CAD/GIS integration, and data versioning. He has been directly involved in some of the largest successful implementations of geospatial network documentation/records management systems in the utility and telecommunications sectors. Geoff is a frequent speaker at geospatial events around the world including Where 2.0, GITA (US, Australia, Japan), FOSS4G, GeoBrazil, Map Middle East, URISA, Location Intelligence, and Map World Forum and received a Speaker Excellence Award at GITA in 2009.

Keywords: BIM, convergence



Building 3D Models from Images

Eyal Ofek

Session 2 - Enabling Technology
Session 2c - User applications
Session 3 - From R&D to persistence/commercialization   Tuesday May 24 2:30 PM - 4:30 PM
Session Chair: George Percivall, Nadine Alameh, OGC



Integrating 3D Data in Service-based Visualization Systems

Jan Klimke, Dieter Hildebrandt, Benjamin Hagedorn, Jürgen Döllner

Georeferenced data is available from a wide range of sources, e.g., directory services, Sensor Observation Services, Web Feature Services, or even proprietary interfaces. Much of the data originating from an Internet of Things will be three-dimensional, representing outdoor as well as indoor geographic features and their properties. Based on this data, its integration, and its visualization, entirely new applications and systems could be designed and implemented, supporting various application domains. Recent work in the area of service-based 3D visualization enables high-quality visualization of complex 3D geodata, e.g., 3D city models and 3D indoor building models, on thin clients as well as mobile devices such as smartphones and tablets. This work uses a service-based, image-based visualization approach that decouples the server-side, resource-intensive management and rendering of complex, massive 3D geodata from client-side display functionality: a Web View Service provides image representations of a 3D scene; these images, which can contain different types of information per pixel, are transmitted to a client application that can reconstruct a 3D representation of the scene.

In this talk, we will describe how to combine 3D geodata originating from the Internet of Things with this service-based approach in a way that allows for the interactive exploration of, and interaction with, 3D worlds and objects of interest. In detail, this 3D geodata can be integrated into the visualization process a) at the rendering stage of a portrayal service, b) through an image post-processing step, or c) in the client application itself. Moreover, this data can be represented visually either directly, by modifying the appearance of existing features, e.g., for visualizing measurements, or indirectly, by introducing additional objects, e.g., icons, into the 3D scene. We will discuss the advantages and disadvantages of these different approaches for implementing visualization applications using live geodata sources.
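One way to picture the client-side reconstruction step described above is unprojecting a per-pixel depth value back into a 3D camera-space point. The sketch below assumes a simple pinhole camera model with hypothetical parameter names; the actual Web View Service defines its own image formats and camera descriptions.

```python
import math


def unproject(px, py, depth, width, height, fov_y_deg, aspect):
    """Reconstruct a camera-space 3D point from a pixel coordinate and the
    per-pixel depth delivered alongside the color image (the principle behind
    client-side scene reconstruction from image-based service responses)."""
    # Pixel center -> normalized device coordinates in [-1, 1]
    ndc_x = (px + 0.5) / width * 2.0 - 1.0
    ndc_y = 1.0 - (py + 0.5) / height * 2.0
    tan_half_fov = math.tan(math.radians(fov_y_deg) / 2.0)
    # Scale by depth along the viewing ray
    x = ndc_x * tan_half_fov * aspect * depth
    y = ndc_y * tan_half_fov * depth
    return (x, y, -depth)  # camera looks down the -z axis
```

Applying this to every pixel of a color+depth image yields a point cloud the thin client can re-render locally, which is what makes limited interaction possible without shipping the full 3D model to the device.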

Keywords: Geovisualization, service-based visualization, geodata integration


1Spatial Group Limited, Fondazione Graphitech

An Internet of Places: Navigating the Web in Space-Time

Paul Watson, Giuseppe Conti, Federico Prandi

We are becoming increasingly accustomed to accessing content in relation to real-world locations via specialized geobrowsers, mobile apps, and Web 2.0 mash-ups. The variety of content and distribution channels increases daily: public web services, sensor networks, and social networking sites. However, each of today's applications is hard-wired to specific data and requirements, severely limiting its potential for reuse. Moreover, linking these applications to and from other relevant digital resources in an integrated way is not possible; the lack of native spatiotemporal support at the Web level precludes geographical or location-based contextualisation of most digital resources available via the Internet. This paper presents a vision for the next generation of intelligent, web-based applications, capable of delivering context-aware, real-time access to large data repositories by providing technology to organize, filter and explore Web content from every domain using the same intuitive, user-driven, spatiotemporal metaphor. Associating spatiotemporal context with Web resources, or inferring it from them, so that others can discover and combine them in new ways, requires not only generic, Web-native spatiotemporal data models and flexible data encoding, query and transmission mechanisms, but also novel data crawling and indexing methods. It also mandates a new user search and service delivery paradigm which embeds both existing and new digital resources in a virtual and semantic fabric of space-time that can be searched and explored simply by looking into the virtual world using geobrowsers and augmented reality devices. Together, these facilities give rise to a new class of application which continuously offers new data and services relevant to the user's location, time and task as they browse; this is what we call the Internet of Places.


Northrop Grumman

Sensor Web Standards and the Internet of Things

Scott Fairgrieve, Stefan Falke

Sensors are a key enabler in the realization of an Internet of Things; they empower us to better understand the state of the world around us and to discover and glean information about the objects and actions that drive that world. Many of the objects we associate with the Internet of Things are sensor-based systems, contain sensors as key components (e.g., buildings, vehicles, appliances), or require sensors in order to be discovered and located. The measurements and information from those sensors are what provide much of the Internet of Things with meaningful data. RFID chips, QR codes, and other technologies facilitate tagging, identifying, and locating objects, but making the presence of these tagged objects and their associated information known to the broader world ultimately requires sensors, such as RFID readers and mobile device cameras, and standard mechanisms for describing and disseminating that information.

Keeping the importance of sensors in mind, this presentation explores the applicability of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards to help build and drive the Internet of Things by standardizing the way in which sensors and sensor data are described, discovered, accessed, and controlled. SWE provides extensive support for describing the location of sensors and their observations, and this location information is a key aspect of data within the Internet of Things, allowing both human users and intelligent objects to know where they are, what they do, and what objects and data are available around them. This presentation describes how SWE-based sensor descriptions and location information, and the spatial relationships derived from that information, can be applied in a variety of novel applications to facilitate an Internet of Things.
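As a loose illustration of the kind of located sensor observation that SWE standardizes, the sketch below assembles a simplified, O&M-inspired XML record for an RFID reader reporting a tag detection. The element names and the sensor are illustrative only; this is not schema-valid SWE/O&M markup.

```python
import xml.etree.ElementTree as ET


def observation_xml(sensor_id, lat, lon, prop, value, uom, time):
    """Build a simplified observation record: which sensor, what it measured,
    when, where, and the result (inspired by, not conformant to, OGC O&M)."""
    obs = ET.Element("Observation")
    ET.SubElement(obs, "procedure").text = sensor_id
    ET.SubElement(obs, "observedProperty").text = prop
    ET.SubElement(obs, "samplingTime").text = time
    # Location is first-class: it lets consumers ask "what is around me?"
    loc = ET.SubElement(obs, "location", srsName="EPSG:4326")
    loc.text = f"{lat} {lon}"
    result = ET.SubElement(obs, "result", uom=uom)
    result.text = str(value)
    return ET.tostring(obs, encoding="unicode")


xml = observation_xml("urn:example:rfid-reader-42", 38.90, -77.04,
                      "tagPresence", 1, "count", "2011-05-24T14:30:00Z")
```

The point of the real SWE encodings (SensorML, O&M, SOS) is that any client can parse records like this without prior knowledge of the particular sensor, which is exactly the interoperability the abstract argues an Internet of Things requires.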

Keywords: Sensor web, Internet of Things (IoT), OGC, SWE, standards


Telcordia Applied Research

Let's Move E911 Indoors

Michael J Loushine (Senior Scientist, Wireless Systems & Networks)
Clifford A Behrens (Senior Scientist & Director, Information Analysis)

There has been much recent activity within communications networking standards groups to define ways of discovering, representing and conveying the outdoor locations of objects and people. Much of the motivation for this activity has come from the need to comply with FCC E911 mandates. Over the last couple of years, Telcordia has assembled a testbed to demonstrate the application of some of these standards in network communications infrastructure that enables the recruitment and deployment of rapid emergency response teams. For example, this infrastructure provides situational awareness by integrating 3GPP IMS to provide a SIP/IP core network infrastructure and route 911 calls, OMA LOCSIP to convey terminal locations, OMA Presence SIMPLE to publish status notifications, IETF LoST to select a PSAP to answer 911 calls, GSMA RCS to publish locations/presence and perform user-based position determination of terminals, IETF HELD to perform network-based position determination of terminals, and WiMAX to transmit and receive voice, video, and other data. To date, our technology demonstration has considered only outdoor emergency scenarios; we now plan to extend our testbed by moving the emergency scenario indoors. An indoor extension of our scenario will require the adoption and integration of other location-based standards. Consequently, we are currently planning enhancements to our infrastructure that make use of IETF PIDF-LO, OMA SUPL, and OGC CityGML, i.e., newer standards for representing and conveying indoor locations and their context. Our objective is to demonstrate the value these standards offer to network communications for managing indoor emergencies, to provide feedback to standards forums based on experience from our demonstrations, and to expose opportunities to enhance and integrate them in ways critical to meeting the needs of decision-makers and emergency response teams.
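To make the indoor case concrete, the sketch below shows roughly how an indoor civic location could be conveyed in a PIDF-LO document, following the structure of RFC 4119 with the civic address elements (floor, room) of RFC 5139. The entity, address values, and tuple id are hypothetical, and the fragment is simplified rather than fully schema-valid.

```python
import xml.etree.ElementTree as ET

# Sketch of a PIDF-LO document carrying an indoor civic location;
# all concrete values are made up for illustration.
PIDF_LO = """\
<presence xmlns="urn:ietf:params:xml:ns:pidf"
          entity="pres:caller@example.com">
  <tuple id="loc1">
    <status>
      <geopriv xmlns="urn:ietf:params:xml:ns:pidf:geopriv10">
        <location-info>
          <civicAddress xmlns="urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr">
            <country>US</country>
            <A1>NJ</A1>
            <A3>Piscataway</A3>
            <FLR>3</FLR>
            <ROOM>3C-221</ROOM>
          </civicAddress>
        </location-info>
        <usage-rules/>
      </geopriv>
    </status>
  </tuple>
</presence>
"""

root = ET.fromstring(PIDF_LO)
civic_ns = "{urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr}"
floor = root.find(f".//{civic_ns}FLR").text  # the indoor-critical fields
room = root.find(f".//{civic_ns}ROOM").text
```

The floor and room elements are precisely what the outdoor E911 flow lacks: a PSAP that receives only a latitude/longitude cannot direct responders to the third floor of a building, which is why the indoor extension hinges on standards like PIDF-LO.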

Keywords: Emergency response, indoor location, location-based services


Geoweb Forum

Beyond the Check-In - Fragmentation and Consolidation in the Emerging Geoweb Industry

Peter Verkooijen (Founder)

Until the introduction of the iPhone in 2007, GIS and the geospatial web were the exclusive domain of academics and technologists in large organizations and governments. GPS-enabled smartphones, combined with services like Google Maps, opened up the field to existing communities of web entrepreneurs. They did with the new capabilities what they knew best and created free mobile location-based services to gather the large user bases required to generate advertising revenue.

New York-based Foursquare led this new wave in consumer LBS apps. The Geoweb Forum project was started in 2008 to put the trend into a broader context and connect the traditional New York media, advertising and retail industries with the geospatial world. The project defines the geoweb as the next phase of web after the much hyped web 2.0. Connecting the digital realm to physical location promises to have a much more tangible business impact than social media ever had.

With location check-in now ubiquitous, attention is shifting to platform builders like Xtify, Placecast, Retailigence, LOC-AID and SimpleGeo, companies that focus on solving practical business problems for marketers and retailers as well as application developers. Foursquare launched its own platform initiatives, as did Apple, Google and Microsoft. Managing the fragmentation in handsets, operating systems and carriers and making sense of location data streams is now front and center.

This session will present and discuss insights from panels at the Geoworld Summit, happening May 12th in New York, on fragmentation, consolidation and standardization in the emerging geoweb industry.

Peter Verkooijen is organizer of the Geoweb Forum project and a veteran journalist for leading Dutch trade publications in IT, retail, advertising, supply chain, industrial management and healthcare.

Keywords: Geoweb, internet of things, commercialization, fragmentation, consolidation, check-in



Collaborative Development of Open Standards for Expanding GeoWeb to the Internet of Things

George Percivall (Chief Architect and Executive Director, Interoperability Program)

In a multi-vendor environment, development of the Internet of Things (IoT) will be limited without the emergence of open, consensus standards that enable collaboration. Such standards will define an infrastructure that raises the level of services and the quality of information in the marketplace, thereby providing more opportunities, particularly for the vendors that collaborate to define the standards. Collaborative development is key to consensus adoption and wide use of information technology standards.
Development of effective open standards is a balancing act. The standards need to be agile and adaptive to rapidly changing developments in the marketplace. The standards also need to have a sound engineering foundation and respect relevant aspects of the existing technology base. The use of open standards to connect components, applications, and content - allowing a "white box" view of a component's functionality and interfaces without revealing implementation details - fulfills both the industry requirement for protection of intellectual property and the user requirement for transparency.
The COM.Geo Workshop on "Expanding GeoWeb to an Internet of Things" is an excellent opportunity to discuss how organizations can increase their business based on quality location information in the Internet of Things. Quality information in a multi-vendor environment can only be obtained using standards. An industry-based consortium is needed to establish effective standards for information sharing about location in the Internet of Things. The Open Geospatial Consortium (OGC) has a proven process for industry-wide collaborative development of efficient standards for spatial and location information.
The mission of the OGC is to serve as a global forum for the development and promotion of open standards and techniques in the area of geoprocessing and related information technologies. The OGC has 410+ members - geospatial technology software vendors, systems integrators, government agencies and universities - participating in its consensus standards development and maintenance process. Through its Specification Program, Interoperability Program, and Marketing and Communications Program, the OGC develops, releases and promotes open standards for spatial processing. Technology and content providers collaborate in the OGC because they recognize that lack of interoperability is a bottleneck that slows market expansion. They know that interoperability enabled by open standards positions them both to compete more effectively in the marketplace and to seek new market opportunities.
The OGC recommends the following steps for advancing the GeoWeb to an IoT-based marketplace:
• Define a standards-based "GeoWeb meets IoT" framework to spur coordinated application development.
• Coordinate standards for location in the IoT with other relevant standards development organizations.
• Discuss the framework in the OGC Specification Working Groups to identify whether additional standards are needed.
• Conduct an Embedded Mobile Ecosystem Testbed using the OGC Interoperability Program approach.

Keywords: Geoweb; Internet of Things; collaborative development; standards; OGC