9+ What Does DDS Stand For? (Explained!)



A common abbreviation in computing and data management, this acronym usually stands for Data Distribution Service. DDS is a middleware protocol and API standard for real-time data exchange, particularly suited to high-performance, scalable, and reliable systems. Example applications include coordinating components within autonomous vehicles and managing complex industrial control systems, where low latency and dependable data delivery are critical.

The significance of this technology lies in its ability to facilitate seamless communication between diverse distributed components. Its architecture supports a publish-subscribe model, enabling efficient and flexible data dissemination. Historically, it evolved to address the limitations of traditional client-server architectures under the demands of real-time and embedded systems, offering improvements in performance, scalability, and resilience for interconnected applications.

Understanding this foundation is essential for delving into topics such as DDS security implementations, its role in the Industrial Internet of Things (IIoT), and comparisons with alternative middleware solutions such as message queues or shared-memory approaches. It also provides context for analyzing its impact on emerging technologies in robotics and autonomous systems.

1. Real-time data exchange

Real-time data exchange is a cornerstone capability of Data Distribution Service (DDS). The architecture, by design, prioritizes minimal latency and predictable delivery times, making it well suited to systems where timely information is paramount. Data exchange must occur within strict temporal bounds for the overall system to operate correctly. This characteristic is not an optional feature but an integral part of the protocol's specification and implementation; the focus on speed makes DDS a fundamental component for applications requiring deterministic behavior.

Its importance is highlighted in domains such as autonomous vehicles, where split-second decisions based on sensor data are critical for safety. Likewise, in financial trading platforms, real-time market data feeds are essential for executing trades and managing risk. In industrial automation, rapid feedback loops enable precise control of manufacturing processes, minimizing errors and maximizing efficiency. DDS achieves real-time performance through mechanisms such as optimized data serialization, efficient transport protocols, and configurable Quality of Service (QoS) policies that allow prioritization of critical data streams.

In summary, real-time data exchange is not just a desirable attribute of DDS but a core functional requirement for many of its target applications. This places stringent demands on the underlying implementation and network infrastructure. Overcoming challenges related to network congestion, serialization overhead, and processor load is essential for realizing the full potential of DDS in demanding real-time systems. This performance aspect ties directly to its value in building robust, responsive, and dependable distributed applications, and connects it to broader topics such as distributed databases and networked systems.

2. Publish-subscribe architecture

The publish-subscribe architecture is a defining characteristic of Data Distribution Service (DDS) and central to understanding its capabilities. This communication paradigm enables a decoupled interaction model in which data producers (publishers) transmit information without direct knowledge of the consumers (subscribers), and vice versa. This decoupling enhances system flexibility, scalability, and resilience.

  • Decoupling of Publishers and Subscribers

    The separation of publishers and subscribers reduces dependencies within the system. Publishers are responsible for producing data and handing it to DDS, without needing to know which applications are interested in it. Subscribers express interest in specific data topics, and DDS ensures they receive the relevant updates. This model facilitates independent development and deployment of system components. An example is a sensor network in which individual sensors (publishers) transmit data to a central processing unit (subscriber) without explicit connections; changes to the sensors do not require modifications to the processing unit, highlighting the inherent flexibility.

  • Topic-Based Data Filtering

    DDS uses a topic-based system for data filtering and distribution. Publishers send data associated with a specific topic, and subscribers register interest in one or more topics. The middleware then ensures that subscribers receive only data relevant to their registered topics, reducing network traffic and processing overhead, since subscribers are not burdened with irrelevant information. For example, in an autonomous vehicle, separate topics might exist for lidar data, camera images, and GPS coordinates; a navigation module would subscribe only to the GPS topic, receiving just the location information it needs.

  • Quality of Service (QoS) Policies

    The publish-subscribe model in DDS is augmented by a comprehensive set of Quality of Service (QoS) policies. These policies govern various aspects of data delivery, including reliability, durability, latency, and resource allocation, and allow developers to fine-tune the system's behavior to specific application requirements. For example, a real-time control application might prioritize low latency and high reliability, while a data-logging application might prioritize durability to ensure no data is lost. These policies can be configured at both the publisher and subscriber level, providing granular control over data delivery characteristics.

  • Dynamic Discovery and Scalability

    DDS employs a dynamic discovery mechanism that allows publishers and subscribers to find one another automatically, without pre-configuration or centralized registries. This lets the system scale easily and adapt to changes in network topology. As new publishers or subscribers join the network, they automatically announce their presence and DDS routes data accordingly. This characteristic matters in large distributed systems where the number of nodes may vary over time; a cloud-based data processing platform, for instance, can add or remove compute nodes without disrupting the overall system.

These aspects of the publish-subscribe architecture within DDS are essential for building scalable, flexible, and robust distributed systems. The decoupling, topic-based filtering, QoS policies, and dynamic discovery mechanisms make it suitable for a wide range of applications, including real-time control, data acquisition, and distributed simulation. By abstracting away the details of network communication, DDS simplifies the development of distributed applications and lets developers focus on their core logic.
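To make the decoupling and topic-based filtering described above concrete, here is a minimal in-process sketch in Python. It is illustrative only — a real DDS implementation adds discovery, network transport, and QoS — and every name in it (`TopicBus`, `subscribe`, `publish`) is invented for this example rather than taken from any actual DDS API:

```python
from collections import defaultdict
from typing import Any, Callable

class TopicBus:
    """Toy in-process analogue of DDS topic-based publish-subscribe."""

    def __init__(self) -> None:
        # topic name -> list of subscriber callbacks
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        """Register interest in a topic; publishers never see who subscribed."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample: Any) -> None:
        """Deliver a sample only to subscribers of this topic (topic-based filtering)."""
        for callback in self._subscribers[topic]:
            callback(sample)

# Usage: a navigation module subscribes only to the "gps" topic,
# so lidar samples never reach it.
bus = TopicBus()
received = []
bus.subscribe("gps", received.append)
bus.publish("gps", {"lat": 48.1, "lon": 11.6})
bus.publish("lidar", [0.5, 0.7, 0.9])   # filtered out: no "lidar" subscription
print(received)  # [{'lat': 48.1, 'lon': 11.6}]
```

Note how the publisher of lidar data needs no change when the navigation module is added or removed — the decoupling lives entirely in the topic name.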

3. Decentralized communication

Decentralized communication is a foundational principle underpinning Data Distribution Service (DDS), directly influencing its architecture, performance, and suitability for distributed systems. This approach departs from traditional client-server models in favor of a more resilient and scalable communication paradigm.

  • Elimination of Single Points of Failure

    The decentralized communication inherent in DDS mitigates the risk associated with single points of failure. Unlike centralized systems, where a server failure can halt the entire network, DDS distributes communication responsibilities across multiple nodes. If one node fails, the remaining nodes continue to communicate, maintaining system functionality. Autonomous vehicles exemplify this: the failure of one sensor data stream does not stop data exchange, allowing other subsystems to compensate.

  • Peer-to-Peer Communication Model

    DDS uses a peer-to-peer communication model, enabling direct interaction between data producers and consumers without intermediaries. This reduces latency and improves performance compared with broker-based systems, where every message must pass through a central server. For example, a data-logging service can receive data directly from distributed sensors, bypassing a central collector, and every node can access the same information as its peers.

  • Distributed Data Cache

    Each node in a DDS network maintains a local data cache, enabling efficient access to frequently used data. This distributed caching reduces network traffic and improves response times, since nodes can read from their local cache instead of repeatedly querying a central server. Such caching is valuable in complex industrial applications such as power grids.

  • Fault Tolerance and Redundancy

    Decentralized communication contributes to the inherent fault tolerance and redundancy of DDS. Because data and communication responsibilities are spread across multiple nodes, the system can tolerate the loss of individual nodes without compromising overall functionality. This redundancy increases robustness and availability, and it is a foundational aspect of DDS's use in military applications.

These facets of decentralized communication significantly enhance system resilience, scalability, and performance. The absence of central dependencies reduces vulnerabilities and fosters a more robust and adaptable distributed environment, making DDS a preferred choice for applications demanding high reliability and real-time data exchange. The distributed nature directly improves a system's resilience to attacks and accidents, and the ability to distribute caches makes DDS an important part of many IoT networks.
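The local-cache idea above can be sketched in a few lines of Python. This is a deliberately simplified model — a real DDS reader cache is governed by HISTORY and LIFESPAN QoS, not a hand-rolled timestamp check — and the class and method names here are invented for illustration:

```python
import time
from typing import Any, Optional

class LocalSampleCache:
    """Toy sketch of a per-node sample cache: each reader keeps the most
    recent sample per key locally, so reads do not hit the network."""

    def __init__(self) -> None:
        self._store: dict[str, tuple[float, Any]] = {}

    def on_sample(self, key: str, value: Any) -> None:
        """Called when a sample arrives from a peer; overwrites older data."""
        self._store[key] = (time.monotonic(), value)

    def read(self, key: str, max_age_s: float = 1.0) -> Optional[Any]:
        """Return the cached value if it is fresh enough, else None."""
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, value = entry
        return value if time.monotonic() - ts <= max_age_s else None

cache = LocalSampleCache()
cache.on_sample("grid/frequency_hz", 50.02)
print(cache.read("grid/frequency_hz"))  # 50.02 (served locally, no network hop)
print(cache.read("grid/voltage_v"))     # None  (never received)
```

The point of the sketch is the access pattern: consumers read from memory on their own node, and the network is used only to keep those caches updated.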

4. Scalability and performance

Scalability and performance are intrinsic characteristics of Data Distribution Service (DDS). The protocol's design explicitly addresses the challenges of distributing data in real time across many nodes, making it suitable for applications requiring both high throughput and low latency. Architectural choices such as the publish-subscribe model and decentralized communication directly contribute to its ability to handle large data volumes and scale horizontally. Without this inherent scalability and performance, DDS would be impractical for applications such as autonomous vehicles or large-scale industrial control systems, where responsiveness and the ability to manage a growing number of data sources are critical. The practical significance lies in the reliable, timely delivery of data in complex, dynamic environments.

The efficiency of DDS is further enhanced by its Quality of Service (QoS) policies, which let developers fine-tune data delivery characteristics to specific application requirements. In a simulation environment, for instance, many simulated entities may produce data concurrently; through its configurable QoS, DDS can prioritize critical data streams, ensuring that essential information is delivered with minimal latency. This control over data flow is essential for maintaining system stability and responsiveness under heavy load. Moreover, the decentralized architecture eliminates single points of failure, improving resilience and availability, and the ability to scale horizontally by adding nodes without significantly degrading performance is vital for handling growing data volumes and user demands.

In summary, scalability and performance are not merely desirable attributes but fundamental elements of Data Distribution Service, directly linked to the protocol's architecture and feature set. Its capacity to handle large data streams in dynamic environments underpins its use in fields from robotics to aerospace. Challenges remain in optimizing DDS configurations for specific use cases and in ensuring interoperability across different DDS implementations, but the underlying principles of scalability and performance are central to its continued relevance in the evolving landscape of distributed systems.

5. Interoperability standard

Data Distribution Service (DDS) emphasizes interoperability as a core tenet. The specification is maintained by the Object Management Group (OMG), ensuring adherence to a standardized protocol across different vendor implementations. This adherence is not merely a matter of compliance; it is integral to the protocol's role in enabling seamless communication between heterogeneous systems. The ability of diverse DDS implementations to exchange data reliably depends on this interoperability standard. For example, a system composed of sensors from different manufacturers can use DDS to integrate sensor data onto a unified platform, provided each sensor adheres to the DDS specification. Without this standard, integration efforts would require custom interfaces and translation layers, significantly increasing complexity and cost.

The practical implications of the standard extend beyond simple data exchange. It facilitates the creation of modular and extensible systems: organizations are not locked into specific vendor solutions and can choose the best components for their needs, knowing that those components will interoperate. It also fosters innovation by encouraging competition among vendors, which drives the development of more advanced and cost-effective solutions. A concrete example is robotics, where arms from different manufacturers must work in concert under a shared control system; using the protocol ensures seamless communication between them. A standard likewise eases integrating, upgrading, and securing diverse system components.

In conclusion, the commitment to being an interoperability standard is not a detail but a fundamental part of the value proposition. It enables seamless integration, facilitates modular system design, and promotes innovation. While challenges remain in ensuring consistent adherence to the standard across all implementations and in addressing evolving security threats, the foundational commitment to interoperability remains a core strength of the technology and directly underpins its relevance in modern distributed systems.

6. Quality of Service (QoS)

Quality of Service (QoS) is an integral element of Data Distribution Service (DDS), directly influencing how data is managed, prioritized, and delivered. The connection between QoS and the standard is causal: DDS employs QoS policies to ensure that real-time data delivery requirements are met. These policies govern aspects of data communication including reliability, durability, latency, and resource allocation, and appropriate QoS settings let developers tune DDS to specific application needs. For example, a safety-critical system might prioritize reliability and low latency to guarantee data delivery with minimal delay, while a monitoring application might prioritize durability to ensure no data is lost, even during network outages. Without configurable QoS, the protocol would be inadequate for many real-time and embedded systems, which highlights its importance as a foundational component.

The practical significance of the relationship between QoS and DDS is evident across applications. In autonomous vehicles, different data streams have different criticality levels: sensor data used for immediate collision avoidance requires stringent reliability and minimal latency, achieved through dedicated QoS policies, while diagnostic data may tolerate higher latency and lower reliability. These policies ensure that critical information is delivered promptly and reliably, improving safety and operational efficiency. In industrial control systems, DDS and its associated QoS policies manage the flow of data between sensors, actuators, and controllers, ensuring precise and timely control of industrial processes. Selecting appropriate QoS policies depends on a thorough analysis of application requirements, considering factors such as network bandwidth, data volume, and acceptable latency.

In conclusion, Quality of Service is not an optional feature but an indispensable part of what defines the Data Distribution Service standard. It provides the mechanisms to control data delivery characteristics, enabling DDS to adapt to the diverse requirements of real-time and embedded systems. While configuring and managing complex QoS policies remains challenging, particularly in large-scale distributed systems, the fundamental role of QoS in enabling efficient, reliable data distribution is critical, and it connects directly to a wider understanding of networked and distributed systems.
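The trade-off between streams can be sketched as a small QoS profile type in Python. The kind names BEST_EFFORT/RELIABLE and VOLATILE/TRANSIENT_LOCAL mirror policy kinds that do appear in the OMG DDS specification, but the Python classes, field names, and the two example profiles below are a toy sketch, not any vendor's API:

```python
from dataclasses import dataclass
from enum import Enum

class Reliability(Enum):
    BEST_EFFORT = "best_effort"   # samples may be dropped under congestion
    RELIABLE = "reliable"         # samples are retransmitted until acknowledged

class Durability(Enum):
    VOLATILE = "volatile"                # late joiners receive nothing
    TRANSIENT_LOCAL = "transient_local"  # late joiners receive recent samples

@dataclass(frozen=True)
class QosProfile:
    """Toy subset of DDS-style QoS knobs, chosen per data stream."""
    reliability: Reliability
    durability: Durability
    deadline_ms: int  # maximum tolerated gap between consecutive samples

# A collision-avoidance stream trades durability for a tight deadline;
# a diagnostics stream tolerates delay but must keep history for late readers.
collision_avoidance = QosProfile(Reliability.RELIABLE, Durability.VOLATILE, deadline_ms=10)
diagnostics = QosProfile(Reliability.RELIABLE, Durability.TRANSIENT_LOCAL, deadline_ms=5000)

print(collision_avoidance.deadline_ms < diagnostics.deadline_ms)  # True
```

The design point is that QoS is declared per stream, not per system: two streams in the same vehicle carry very different contracts.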

7. Data-centric design

Data-centric design is not merely a philosophy but a core architectural element of Data Distribution Service (DDS). The relationship between the two is causal: DDS operates according to a data-centric model, which shapes how data is defined, managed, and exchanged across distributed systems. This design prioritizes the structure and characteristics of the data itself rather than focusing solely on the communication endpoints. The consequence is a system in which data consumers express their needs in terms of data properties, and the infrastructure ensures delivery of data matching those requirements. The success of DDS in real-time systems hinges on the effectiveness of this data-centric approach, which lets complex systems interact based on data needs rather than point-to-point communication.

The practical significance of data-centric design is illustrated in complex distributed applications such as aerospace systems, in which numerous sensors, processors, and actuators exchange data continuously. A data-centric architecture lets each component focus on the specific data it requires, regardless of the source or location of that data. For instance, a flight control system might require precise altitude data and specify this requirement through data filters defined within DDS; the system then ensures delivery of altitude data meeting specific accuracy and latency criteria, regardless of which sensor provides it. This contrasts with traditional approaches in which point-to-point connections are established and data formats are tightly coupled, creating rigidity and complexity, and it makes integrating new components much easier.

In summary, data-centric design is not a mere design choice for DDS; it is integral to its operational model. It enables decoupling of data producers and consumers, enhances system flexibility, and facilitates efficient data management in complex distributed systems. Although defining data models and managing data consistency across large networks remain challenging, the fundamental advantages of data-centricity stay central to DDS's utility and its continued relevance in modern distributed computing. This design is responsible for high scalability and ease of use in complex situations.
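The "consumers express needs as data properties" idea can be sketched as a content filter. In real DDS this is done with content-filtered topics and an SQL-like filter expression; the Python below is a hedged analogue in which the sample type, sensor names, and accuracy threshold are all invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass(frozen=True)
class AltitudeSample:
    source: str        # which sensor produced the sample
    altitude_m: float
    accuracy_m: float

def content_filter(max_accuracy_m: float) -> Callable[[AltitudeSample], bool]:
    """Build a predicate expressing a consumer's requirement:
    'altitude samples with accuracy better than max_accuracy_m'."""
    return lambda s: s.accuracy_m <= max_accuracy_m

def deliver(samples: Iterable[AltitudeSample],
            wanted: Callable[[AltitudeSample], bool]) -> list[AltitudeSample]:
    """The 'infrastructure': forwards only samples matching the requirement,
    regardless of which sensor produced them."""
    return [s for s in samples if wanted(s)]

samples = [
    AltitudeSample("baro_1", 1203.4, accuracy_m=8.0),
    AltitudeSample("gps_2", 1201.9, accuracy_m=2.5),
    AltitudeSample("radar_3", 1202.1, accuracy_m=0.4),
]
precise_only = deliver(samples, content_filter(max_accuracy_m=3.0))
print([s.source for s in precise_only])  # ['gps_2', 'radar_3']
```

The flight-control consumer never names a sensor — it names a data property, and any source meeting the criterion qualifies.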

8. Low latency

Low latency is a critical performance attribute intrinsically linked to the architecture and function of Data Distribution Service (DDS). The protocol is designed to minimize delay in data delivery, making it suitable for real-time systems where timely information is paramount. The connection between DDS and minimal delay is causal: the protocol incorporates architectural features and configuration options specifically aimed at low-latency communication. This is not merely a desirable attribute; it is a fundamental requirement for many DDS use cases. In autonomous driving systems, for example, decisions based on sensor data must be made in milliseconds to ensure safety and responsiveness; without low latency, such applications would be infeasible. The architecture was purposefully built for the timely passing of information.

Several aspects of DDS contribute to its low-latency capabilities. The publish-subscribe model allows data to be delivered directly to consumers without passing through intermediaries, reducing communication overhead. Quality of Service (QoS) policies provide fine-grained control over data delivery characteristics, letting developers prioritize low latency for critical data streams. The decentralized architecture eliminates single points of failure and reduces network congestion, further minimizing delays. In financial trading platforms, for example, low latency is essential for executing trades and managing risk effectively; the ability of DDS to deliver market data with minimal delay lets traders react quickly to changing market conditions. This low latency directly underpins the dependable systems the protocol seeks to enable.

In conclusion, low latency is not an optional feature but an essential part of Data Distribution Service. The protocol's architecture and QoS policies are designed to minimize delays in data delivery. While challenges remain in optimizing DDS configurations for specific applications and in guaranteeing low latency in complex network environments, minimal delay remains central to the protocol's value proposition and its relevance in demanding real-time systems; systems can rely on the protocol only when their latency requirements are met. This connects to a wider understanding of communication and its impact on time-dependent systems.

9. Resilient communication

Resilient communication is an inherent attribute of Data Distribution Service (DDS) and fundamentally intertwined with its architecture and operating principles. The association between robust communication and this data-centric middleware is causal: the design of DDS explicitly incorporates mechanisms to ensure dependable data exchange even in the face of network disruptions, node failures, or data loss. This resilience is not an ancillary feature but a core requirement for many applications that rely on DDS, particularly in critical infrastructure and real-time control systems. In a power grid, for example, the communication network must withstand component failures to maintain grid stability; DDS supports continuous data dissemination through its distributed architecture and fault-tolerance features. Without this level of resilience, many complex distributed systems would be vulnerable to disruptions, potentially with catastrophic consequences.

The publish-subscribe paradigm, combined with configurable Quality of Service (QoS) policies, plays a significant role in achieving communication robustness. The decoupling of data producers and consumers reduces dependencies and minimizes the impact of individual node failures. QoS policies let developers specify reliability requirements, ensuring that critical data is delivered even under adverse network conditions: lost data packets can be retransmitted, alternative data sources can be selected automatically, and data can be persisted in distributed caches. In an autonomous vehicle, where sensor data is essential for safe navigation, QoS policies can guarantee reliable delivery of sensor information even when some sensors experience temporary communication loss, allowing the vehicle to maintain awareness of its surroundings and continue operating safely. This redundancy makes DDS a good choice for any system that operates in hazardous conditions or environments.

In summary, resilient communication is not merely a desirable attribute; it is a foundational component. The distributed architecture, the publish-subscribe model, and the flexible QoS policies work in concert to provide robust data delivery in demanding environments. While challenges remain in configuring DDS for optimal resilience in complex network topologies and in mitigating malicious attacks, the commitment to dependable communication remains central to the long-term value of DDS in an increasingly interconnected world. This links directly to a wider understanding of distributed systems, where resilience is paramount for ensuring operational continuity and mitigating risk; the ability to continue operating at reduced capacity is the mark of a well-implemented DDS deployment.
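The retransmission behavior behind RELIABLE-style delivery can be sketched in a few lines. Real DDS reliability is ack/nack-based at the protocol level (RTPS heartbeats and acknowledgements), so treat this Python loop — and its invented names `reliable_send` and `lossy_transport` — purely as an illustration of the retry contract:

```python
from typing import Callable

def reliable_send(sample: str,
                  transmit: Callable[[str], bool],
                  max_retries: int = 3) -> bool:
    """Toy 'RELIABLE'-style delivery: retransmit until the (possibly lossy)
    transport acknowledges the sample or retries are exhausted."""
    for _ in range(1 + max_retries):
        if transmit(sample):   # True models a received acknowledgement
            return True
    return False

# A lossy transport that drops the first two transmissions.
attempts = {"n": 0}
def lossy_transport(sample: str) -> bool:
    attempts["n"] += 1
    return attempts["n"] >= 3   # acknowledged only on the third try

print(reliable_send("wheel_speed=3.2", lossy_transport))  # True
print(attempts["n"])                                      # 3
```

A BEST_EFFORT stream, by contrast, would simply call `transmit` once and move on — the choice between the two is exactly the reliability QoS discussed above.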

Frequently Asked Questions About Data Distribution Service

This section addresses common inquiries about the functionality and applications of Data Distribution Service (DDS), providing concise explanations of its key features.

Question 1: What is the primary purpose of Data Distribution Service?

Its primary purpose is to facilitate real-time data exchange between distributed components within a system. It provides a standardized middleware solution for applications requiring high performance, scalability, and reliability, particularly in environments where low latency and deterministic behavior are critical.

Question 2: How does it differ from traditional message queue systems?

It differs from traditional message queue systems in its data-centric approach and support for Quality of Service (QoS) policies. Unlike message queues, which focus primarily on message delivery, DDS emphasizes the characteristics of the data being exchanged and lets developers fine-tune data delivery to specific application requirements.

Question 3: What are the key benefits of its publish-subscribe architecture?

The publish-subscribe architecture promotes decoupling between data producers and consumers, enhancing system flexibility, scalability, and resilience. Components can publish data without needing to know which applications are interested in it, and applications can subscribe to specific data topics without needing to know the source of the data. This reduces dependencies and simplifies system integration.

Question 4: What role does Quality of Service play in its operation?

Quality of Service policies are integral to the operation of the standard, letting developers control aspects of data delivery including reliability, durability, latency, and resource allocation. These policies allow the standard to adapt to diverse application requirements, ensuring that critical data is delivered with the appropriate characteristics.

Question 5: How does Data Distribution Service achieve low-latency communication?

The standard achieves low-latency communication through several architectural features, including a peer-to-peer communication model, a distributed data cache, and configurable QoS policies. These features minimize overhead and reduce delays in data delivery, making it suitable for real-time systems.

Question 6: What are some typical use cases for Data Distribution Service?

Typical use cases include autonomous vehicles, industrial control systems, financial trading platforms, aerospace systems, and robotics. These applications require real-time data exchange, high reliability, and scalability, all of which the standard provides.

These FAQs highlight the core functionality and benefits of DDS, emphasizing its role in enabling robust and efficient real-time data exchange in distributed systems. The details presented earlier in the article should provide a clear understanding.

The next section covers practical considerations for implementing it in real-world applications.

Implementation Tips for Data Distribution Service

Proper deployment requires careful consideration of several factors to ensure optimal performance and reliability.

Tip 1: Define Clear Data Models: Establish robust data models using the Interface Definition Language (IDL) to ensure data consistency and interoperability across system components. For example, clearly define the structure and types of sensor data in an autonomous vehicle to facilitate seamless communication between sensors and processing units.
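DDS data models are normally written in OMG IDL. As a rough analogue of the same discipline, a typed record can be sketched in Python — the field names below and the IDL fragment in the comment are invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass

# Rough Python analogue of an IDL struct for a sensor sample.
# The equivalent IDL might read (illustrative only):
#   struct ImuSample { string sensor_id; long long timestamp_ns;
#                      double accel_x; double accel_y; double accel_z; };
@dataclass(frozen=True)
class ImuSample:
    sensor_id: str
    timestamp_ns: int
    accel_x: float
    accel_y: float
    accel_z: float

sample = ImuSample("imu_front", 1_700_000_000_000, 0.02, -0.01, 9.81)
print(sample.accel_z)  # 9.81
```

Freezing the record mirrors the intent of a shared data model: every producer and consumer agrees on the same immutable field layout.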

Tip 2: Select Appropriate Quality of Service (QoS) Policies: Choose QoS policies based on application requirements, prioritizing factors such as reliability, latency, and durability. For critical data streams, ensure reliable delivery with minimal delay by configuring appropriate QoS settings; different data flows will have different requirements.

Tip 3: Optimize Data Serialization: Employ efficient data serialization techniques to minimize overhead and reduce latency. Consider compact data formats and efficient serialization libraries to improve performance, especially in high-throughput environments.
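As a small illustration of the compact-format point, the standard-library `struct` module packs a sample into a fixed 20-byte binary layout, versus dozens of bytes for an equivalent JSON string (the field layout here is an invented example; DDS itself defines its own wire encoding, CDR):

```python
import struct

# Fixed little-endian layout: one unsigned 64-bit timestamp
# followed by three 32-bit floats = 8 + 4 + 4 + 4 = 20 bytes.
LAYOUT = struct.Struct("<Qfff")

def pack(timestamp_ns: int, ax: float, ay: float, az: float) -> bytes:
    return LAYOUT.pack(timestamp_ns, ax, ay, az)

def unpack(payload: bytes) -> tuple:
    return LAYOUT.unpack(payload)

payload = pack(1_700_000_000_000, 0.02, -0.01, 9.81)
print(len(payload))  # 20
ts, ax, ay, az = unpack(payload)
print(ts)            # 1700000000000
```

Note that the floats round-trip at 32-bit precision, a deliberate size/precision trade-off of the compact layout.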

Tip 4: Monitor Network Performance: Continuously monitor network performance to identify and address potential bottlenecks. Use network monitoring tools to track latency, bandwidth utilization, and packet loss, ensuring optimal communication across the network. This can include alerts for when network latency exceeds an acceptable level.

Tip 5: Implement Robust Security Measures: Enforce strong security measures, including authentication, authorization, and encryption, to protect data from unauthorized access and tampering. Use DDS Security to enforce access control policies and ensure data confidentiality and integrity, and always follow the principle of least privilege when setting up accounts.

Tip 6: Design for Scalability: Architect the system to scale horizontally by adding nodes without significantly impacting performance. Use the dynamic discovery mechanism to automatically detect new nodes and adjust data routing accordingly. Central to this is a well-defined initial architecture.

Tip 7: Understand Data Durability Implications: Take particular care to understand the implications of the different data durability settings; these settings can cause unexpected behavior if not configured correctly.

Implementing these tips will maximize efficiency, security, and scalability. Following these guidelines is essential for successful integration into complex, distributed systems.

The final section provides concluding remarks and recaps what has been covered.

Conclusion

This exploration has thoroughly examined "what does DDS stand for," identifying Data Distribution Service as a critical middleware solution for real-time data exchange. It has established the technology's architectural foundations, emphasizing key characteristics such as its publish-subscribe model, decentralized communication, Quality of Service policies, and commitment to interoperability. Together, these aspects enable the efficient and reliable dissemination of information in demanding distributed systems.

The information presented should encourage deeper investigation into its potential applications. Understanding its capabilities is essential for engineers and architects designing next-generation systems that require deterministic data delivery and robust performance. Continued development and adoption of DDS are essential for addressing the evolving challenges of real-time data management in an increasingly interconnected world.