For several decades, network interfaces have been shifting away from the copy semantics and system-call interface of sockets towards direct user-level access to network hardware and the shared-memory semantics of remote direct memory access (RDMA). Besides avoiding copy overhead, these interfaces allow data transfer without a control-flow transfer, and thus possibly a context switch, on the receiver side. Nevertheless, many of today's distributed systems use RDMA only to implement message exchange (e.g., MPI and our libRIPC) and still couple data transfer with control-flow transfer.

Pilaf, a key-value store by Mitchell et al., employs RDMA to read values directly from the server's memory, but it still relies on synchronous messages for write operations. In contrast, our research enables clients to also perform write operations on a remote key-value store via RDMA. As the authors of Pilaf already point out, the primary challenges in handling writes via RDMA are (1) remote memory allocation and (2) synchronization between concurrent remote writes from clients and local writes by the server. We have found solutions to both problems: after an initial allocation of a memory region, clients can insert new entries into the key-value store and overwrite or delete existing entries without involving the server's CPU.

To solve remote memory allocation, we carefully combine region-based memory allocation with garbage collection. Each client initially requests a dedicated memory region from the server and thereafter writes new entries only to that region; the hash table is the only data structure to which clients write concurrently. Once a client fills up its region or disconnects (e.g., because it fails), the server garbage-collects the region, moves live data to a server-owned region, and returns the client's region to the pool of free regions. A sketch of the client-side allocation path appears below.

For synchronization, we employ a hash table with linear probing and lock-free updates. Because clients perform concurrent operations on the hash table, we must cope with synchronization in a distributed setting where remote processes can fail at any time, unnoticed by both the server and the other clients; explicit locking would therefore require timeouts. Instead, we use lock-free operations based on atomic compare-and-swap over RDMA, also sketched below. This spares us any handling of timeouts and additionally saves the network round trips that explicit locking operations would require.

We are currently evaluating a prototype of our design based on RDMA over InfiniBand, which shows promising early results.
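As a concrete illustration of the allocation scheme, the following C sketch shows how a client might carve entries out of its server-granted region with a simple bump pointer; the struct region layout and the name region_alloc are illustrative assumptions rather than a fixed part of the design. Since a region is exclusive to one client, this path needs neither synchronization nor network traffic:

    #include <stdint.h>
    #include <stddef.h>

    struct region {
        uint64_t remote_base; /* server-side virtual address of the region */
        uint32_t rkey;        /* RDMA key the server handed out with the region */
        size_t   size;        /* total region size in bytes */
        size_t   next;        /* bump pointer; only this client advances it */
    };

    /* Reserve space for a new entry. Returns the remote address the client
     * should RDMA-write the entry to, or 0 if the region is exhausted and a
     * fresh region must be requested from the server. */
    static uint64_t region_alloc(struct region *r, size_t entry_len)
    {
        if (r->next + entry_len > r->size)
            return 0; /* full: request a new region */
        uint64_t addr = r->remote_base + r->next;
        r->next += entry_len;
        return addr;
    }

When region_alloc returns 0, the client requests a fresh region; the server then garbage-collects the exhausted one as described above.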
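The lock-free update itself maps onto a single InfiniBand verbs work request. The sketch below posts an atomic compare-and-swap to one 8-byte hash-table slot (InfiniBand atomics operate on 64-bit, 8-byte-aligned words); it assumes an already established reliable-connected queue pair and a registered 8-byte local scratch buffer, and all names outside the verbs API are illustrative:

    #include <infiniband/verbs.h>
    #include <string.h>

    /* Try to swing one hash-table slot from `expected` to `new_off`
     * (e.g., the region offset of a freshly written entry). The NIC
     * writes the slot's previous value into the local buffer behind
     * `sge`, so the client can detect a lost race. */
    static int post_slot_cas(struct ibv_qp *qp, struct ibv_sge *sge,
                             uint64_t slot_addr, uint32_t rkey,
                             uint64_t expected, uint64_t new_off)
    {
        struct ibv_send_wr wr, *bad;

        memset(&wr, 0, sizeof(wr));
        wr.wr_id                 = slot_addr;
        wr.sg_list               = sge; /* 8-byte buffer receiving the old value */
        wr.num_sge               = 1;
        wr.opcode                = IBV_WR_ATOMIC_CMP_AND_SWP;
        wr.send_flags            = IBV_SEND_SIGNALED;
        wr.wr.atomic.remote_addr = slot_addr; /* 8-byte-aligned slot address */
        wr.wr.atomic.rkey        = rkey;
        wr.wr.atomic.compare_add = expected;  /* swap only if slot == expected */
        wr.wr.atomic.swap        = new_off;

        return ibv_post_send(qp, &wr, &bad);
    }

Once the completion arrives, the local buffer holds the slot's previous contents; if that value differs from expected, the CAS lost a race and the client probes the next slot or retries, just as a local linear-probing table would.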