Channel: Questions in topic: "distributed-search"
Viewing all 180 articles

How to configure the Qualys App for Splunk Enterprise for Kb lookup file in a distributed search environment?

I had a few questions regarding this app; can anyone please help?
1. In a distributed environment, I have installed this app on the forwarder. The index exists on the indexer and I'm able to see the data on the search head when I search for `index=qualys`, but the lookup file qualys_kb lives on the forwarder, so I'm unable to see the lookup data on the search head. What should I do in this case?
2. Should we install the app on both the forwarder and the search head in this case? I think that would duplicate the indexed events; correct me if I'm wrong.
3. If the answer to the above is yes, how do I disable the detection script on the search head and enable only the kb populator script? Enabling only the kb populator script under Data inputs > Scripts on the search head isn't updating the lookup file there.
Any pointers are welcome. Thanks, Rahul

Where to install apps in a distributed environment?

We have a distributed environment of one search head, one indexer and one deployment server + license master. I'm working on resolving CPU utilization issues right now related to too many scheduled searches running during the day and towards that end, I'm trying to prune extraneous applications. I've noticed that I have a number of applications installed on my Indexer as well as my Search Head and I'm concerned that they are causing scheduled searches to be executed extraneously. On which of those servers do I need to install each application? Both Search Head and Indexer or only the Search Head?

Where do we install Splunk Apps (ex: Palo Alto Networks App for Splunk) in a distributed search environment?

In our Splunk environment we have two data centers, each with one indexer and one heavy forwarder, plus one distributed search head. My lab environment is my home, where I install and test Splunk apps. Since my home/lab box is collapsed — that is to say, the indexer, forwarder, and search head are all one box — it is obvious where I install the apps. In our enterprise/production environment, however, this is far less obvious. One app in particular that we want to run is the Palo Alto Networks App for Splunk 5.0.0. It works fine in the lab, but we are not sure where to install it in our distributed environment: the search head, the indexers, the forwarders, or all five boxes. Any guidance on this would be appreciated.

Can other users verify if this is the proper procedure to update TAs in a distributed environment?

I would appreciate it if the following procedure could be verified. I am planning to do the following when updating TAs:
1. Make a backup copy of the TA folder (Splunk_TA_cisco-asa, for example) located in `/opt/splunk/etc/deployment-apps/` or `/opt/splunk/etc/master-apps/`.
2. Copy the folder containing the updated version of the TA into `/opt/splunk/etc/deployment-apps/` or `/opt/splunk/etc/master-apps/`, overwriting the contents of the current version.
3. Issue either `./splunk reload deploy-server` or `./splunk apply cluster-bundle`, depending on whether it is a deployment app or a master app.
If/when changes are made to the "local" folder of an app, they are currently made on the distribution server, not the client. That said, is there a need for me to set `excludeFromUpdate = $app_root$/local`? Thank you.
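For what it's worth, the three steps above could be sketched as a command sequence like this (the archive name and backup suffix are placeholders; adjust the path for deployment-apps vs. master-apps):

```
# 1. Back up the current TA (deployment-server example)
cd /opt/splunk/etc/deployment-apps
cp -a Splunk_TA_cisco-asa Splunk_TA_cisco-asa.bak-$(date +%F)

# 2. Overlay the updated version onto the current one
tar -xzf /tmp/splunk_ta_cisco-asa_latest.tgz -C /opt/splunk/etc/deployment-apps

# 3a. Deployment server: push the update to clients
/opt/splunk/bin/splunk reload deploy-server

# 3b. Or, for master-apps on the cluster master:
# /opt/splunk/bin/splunk apply cluster-bundle --answer-yes
```

On the last question: `excludeFromUpdate` (set in serverclass.conf on the deployment server) protects the listed paths on the *clients* from being overwritten during an update. If, as described above, `local` edits are only ever made on the distribution server itself, the copy being pushed out already contains them, so the setting should not be strictly needed.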

Why are reachable and searchable indexers not showing indexed data when searching in a distributed mode?

Hi, in a distributed mode with 1 search head and 4 indexers, when searching through the search head, 2 of the 4 indexers are not returning indexed data, other than internal logs of other Splunk infrastructure elements. The indexers are reachable, searchable, and indexing data from various equipment. Does anyone have an idea? (version 6.3.1) Thanks!

How many resources do I commit to a master node in distributed multisite indexer clustering deployment?

I am in the process of setting up a distributed clustered deployment that spans 3 different sites. The deployment will live in a virtual environment using VMware vSphere. I have determined the resource requirements for my indexers and search heads, but I am having a little trouble figuring out appropriate resources for the master node. Please help. Thanks.

How to sync apps and configurations without a deployment server in my distributed search environment?

Hi! I have 4 Splunk servers (one per geographical location), each with combined indexer and search head roles (yes, I know that's not ideal, but I'm limited in the number of servers), and each server gets its own portion of events. The servers are united as search peers, so whichever search head you use, all data is searchable. However, all configuration is done manually on each server: index creation, listeners, apps, and so on. I can't use indexer clustering because it doubles (or even quadruples) the required storage and consumes bandwidth on the links between locations. And currently I cannot use a deployment server, because it would require a separate machine (I'm going to have about 2000 forwarders). Are there any tricks for syncing at least some configuration in this scenario? I was thinking about a shell script that would do a regular sync and server restart/reload, but I'm sure this community has some other (better) ideas.
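As a sketch of the shell-script idea mentioned above — not a tested solution; the hostnames are hypothetical, and it assumes passwordless SSH and rsync are available on all four servers:

```
#!/bin/sh
# Push selected configuration from this "source of truth" server to the peers.
SPLUNK_ETC=/opt/splunk/etc
PEERS="splunk-eu splunk-us splunk-apac"

for peer in $PEERS; do
    # Mirror the apps directory; --delete keeps peers an exact copy.
    rsync -az --delete "$SPLUNK_ETC/apps/" "$peer:$SPLUNK_ETC/apps/"

    # indexes.conf changes require a restart; lighter changes may
    # only need a reload.
    ssh "$peer" /opt/splunk/bin/splunk restart
done
```

Be careful not to sync anything host-specific (server.conf, host-local inputs.conf stanzas, each server's splunk.secret). And at ~2000 forwarders, a dedicated deployment server may still end up being the cleaner long-term option.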

Why am I getting "Error while sending public key to search peer: Connection reset by peer"?

I have a Splunk server on Ubuntu and a Splunk forwarder, also on Ubuntu. I want to add the forwarder as a distributed search peer on the Splunk server, but when I try to add it, the error below is generated: "Encountered the following error while trying to save: In handler 'distsearch-peer': Error while sending public key to search peer: Connection reset by peer". How can I fix this problem?

How do I configure the Blueliv app to work with bundle installations in a distributed search environment?

Hi, we run a distributed Splunk platform where the search heads have a bundle location for apps. It seems that this app does not support this configuration: the app location is hard-coded into the .pyo files as $SPLUNK_HOME. I changed the location in the .py scripts, but this made no difference. Is there a way to change this to work with bundle installations without trying to reverse engineer the compiled files? OS: RHEL Linux 64-bit. Splunk: 6.3.X 64-bit. Blueliv: 2.0.2. Cheers, Steve

How to delete indexes in an indexer clustering environment?

Hi, I need to delete some indexes that I created while testing our new distributed Splunk deployment. Is it as easy as:
1. Remove the indexes I want to delete from the `/opt/splunk/etc/master-apps/_cluster/local/indexes.conf` file on the master node.
2. Apply the configuration bundle from the Web UI on the master node: *Settings > Index Clustering > Edit > Distribute Configuration Bundle*.
3. Remove the index directories on all the indexers: `rm -rf /path/to/index/directory`.
I didn't find any documentation about this specific use case. Hope you guys can help me out.
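The steps above could be sketched as a command sequence like this (the index name is an example, and step 3 should only run once the peers have picked up the new bundle):

```
# 1. On the master node: delete the stanza for each index to remove,
#    e.g. [my_test_index], from
#    /opt/splunk/etc/master-apps/_cluster/local/indexes.conf

# 2. Distribute the bundle (CLI equivalent of the Web UI step)
/opt/splunk/bin/splunk apply cluster-bundle --answer-yes
/opt/splunk/bin/splunk show cluster-bundle-status

# 3. On every indexer, after the bundle is active and the index is
#    no longer referenced by any configuration:
rm -rf /path/to/index/directory
```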

How to implement a test environment for our distributed search deployment?

Hello Splunkers, we are planning to implement a test environment for our distributed deployment. Can anyone point me to clear documentation to follow? Regards.

How to copy configurations from the search head, heavy forwarder, and indexer cluster in one environment to a new environment?

I have a distributed `6.2.3` setup with a single `Search head`, an `Indexer cluster` and a single `Heavy Forwarder`. This environment is pretty "dirty" (it's in a lab for testing so it gets abused) so I have built new 6.2.3 (have to stay on this version) servers and want to copy the configuration from the dirty environment to the new environment. Basically I want server settings, licensing, authentication, clustering, distributed search... I don't care about apps and add-ons, indexes, saved searches, etc. I recognize in copying some of the files that edits may be necessary, for example, IPs and hostnames will be different. Is this feasible, reasonable, or am I going about this wrong? If this is the way to go, I'm not sure what files need to be copied... don't want all of `$SPLUNK_HOME/etc`. Your feedback and assistance is appreciated. Thanks.

How to set up Splunk to monitor logs and configure distributed search across 4 different development environments (Dev > Tst > Stg > Prod) in AWS

We have four AWS accounts to host different development environments: Dev -> Tst -> Stg -> Prod. Requirements: we want to set up Splunk to index/monitor logs across all accounts and provide a single endpoint for searching via the GUI. We are thinking about doing the following:
- Set up a dedicated indexer for each account (which that account's forwarders communicate with).
- Configure distributed search (a search head instance) across all indexers to provide an aggregated view across all accounts.
- Give each indexer the same internal DNS name across all accounts. That way we can bake a forwarder with the same configuration into an AMI and promote that AMI across accounts.
- As I understand it, the search head needs network access to the individual indexers (search peers). We're thinking of using VPC peering. We do not need to worry about cross-region connectivity, as we will be using only one region across all accounts.
Can someone from Splunk provide input on this proposed design and comment on whether this is the endorsed way of using Splunk with AWS? Thanks.
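On the distributed-search side of this design, the wiring is small: the search head just lists each account's indexer as a search peer over the management port 8089 (which the VPC peering must allow). A sketch of `distsearch.conf` on the search head, using hypothetical internal DNS names:

```
# $SPLUNK_HOME/etc/system/local/distsearch.conf (search head)
[distributedSearch]
servers = https://idx.dev.internal:8089,https://idx.tst.internal:8089,https://idx.stg.internal:8089,https://idx.prod.internal:8089
```

In practice the peers are usually added with `splunk add search-server` (or via the UI) rather than by editing the file directly, since that also handles the key exchange between the search head and each peer.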

How to install the Splunk App for Check Point and Splunk Add-on for Check Point OPSEC LEA in a distributed search environment?

Hi experts, we are looking to use the Splunk App for Check Point, but the installation steps are confusing from a Splunk point of view. Our Splunk setup is distributed search with 2 search heads and 2 indexers. I have installed the Splunk App for Check Point on the search head, but now I am confused about where to install "splunk-add-on-for-check-point-opsec-lea_3". Is it only on the Splunk forwarder, or on the indexers as well?

Multisite Distributed Search: Why am I getting search head error "Encountered an error deserializing SearchResultsInfo from Results Stream header"?

Hi, in a multisite distributed search environment with 1 search head and 4 indexers, it seems that the search head has difficulty retrieving results from the different indexers. I found this error in the search results on the search head: "ERROR SearchResultParserExecutor - Encountered an error deserializing SearchResultsInfo from ResultsStream header". Does anybody know what it is linked to and how to fix it? Splunk Enterprise 6.3.1

On what instances do I install the RFC5424 Syslog add-on in a distributed search head clustering environment?

I've been spinning my wheels for the past couple of days trying to figure this out. I've read documentation and checked Splunk Answers, and things that should be working don't seem to be working. I am trying to install the RFC5424 Syslog add-on (https://splunkbase.splunk.com/app/978/) to process syslog data handled by a Kiwi Syslog server with a Universal Forwarder installed. The reason I'm installing it is that the default "syslog" sourcetype in Splunk seems to be RFC3164, but I need RFC5424 parsing/indexing.

Our environment looks like this: Universal Forwarder > Heavy Forwarder > Indexer. We have a master indexer and several peer indexers, plus a search head cluster of three search heads.

I put the add-on on a deployment server and pushed it out to the universal forwarders. The add-on is installed and is pulling in the data configured in the inputs.conf file. I searched the data being indexed in Splunk and saw that it was there, but the fields weren't properly selected. I then went to every server from the universal forwarder to the search heads, dropping the add-on into the C:\Splunk\etc\apps folder and restarting the service. No dice. I also installed the add-on through our deployment server and pushed it out to the search heads. Restarted. Still didn't index properly.

I spent some time reading this doc: http://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F Then I went to the heavy forwarders and indexers and manually loaded all of the add-on's .conf files (including lookup and metadata information) into the appropriate subdirectories under `C:\Splunk\etc\system`. Restarted the services and still nothing. When I look at the data in Splunk, it's not broken down the way syslog data should be, with host, priority, etc. Am I missing something? See pic attached. I guess what I'm expecting is that the selected fields will be more representative of the data: priority, hostname, message text, and so on.

How to install the Cisco Networks App and Add-on in a distributed search environment?

We are deploying a distributed Splunk instance, and I installed TA-cisco_ios on my indexers. Does it need to be added anywhere else? We have 1 search head, 2 indexers, and 2 syslog collectors. The syslog collectors are already configured in outputs.conf to add `sourcetype = cisco:ios` for every message arriving in a specific path. Do I need to add the TA on the syslog collectors as well? And is the Cisco IOS app the only thing to install on the search head?

Are there performance improvements from splitting a single Splunk instance into one search head and one indexer on their own servers?

Currently, I have a combined instance where the search head and indexer sit on the same box. The documentation (see the Summary of Performance Recommendations document) indicates that performance improvements can be gained by splitting that centralized deployment into one search head and one indexer, each on its own server. Is that the case? Or do you need to go to one search head with at least two separate indexers? Thanks.

Why do I often see error "Asynchronous bundle replication to 2 peer(s) succeeded; however it took too long..." and how do I fix this?

I see these bundle replication errors very often. Is there a solution or workaround?

02-15-2016 22:56:38.636 -0800 ERROR DistributedBundleReplicationManager - Unexpected problem while uploading bundle: Unknown write error
02-15-2016 22:56:38.636 -0800 ERROR DistributedBundleReplicationManager - Unable to upload bundle to peer named xyz with uri=https://xx.xx.xx.xx:8089.
02-15-2016 22:56:38.637 -0800 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 2 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=37649, tar_elapsed_ms=23682, bundle_file_size=939470KB, replication_id=1455605760, replication_reason="async replication allowed"
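One detail worth noting in the WARN line: `bundle_file_size=939470KB` means the knowledge bundle is roughly 900 MB, which usually points at large lookup files being replicated to the peers on every bundle push. A hedged mitigation sketch for `distsearch.conf` on the search head — the stanza and setting names exist, but the app and lookup names are placeholders:

```
# $SPLUNK_HOME/etc/system/local/distsearch.conf (search head)

# Keep large, rarely-needed files out of the replicated bundle.
[replicationBlacklist]
big_lookup = apps[/\\]myapp[/\\]lookups[/\\]big_lookup\.csv

# Only if shrinking the bundle is not enough, raise limits/timeouts.
[replicationSettings]
connectionTimeout = 60
sendRcvTimeout = 120
maxBundleSize = 2048
```

A lookup excluded this way is no longer available on the peers, so searches that still need it would have to run that lookup locally on the search head. Shrinking the bundle (and checking disk/network throughput during the tar phase, per `tar_elapsed_ms`) is generally preferable to only raising the limits.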

How to integrate a multisite indexer cluster with remote standalone Splunk installations?

Dear Splunkers, we have a multisite indexer cluster in our datacenter and some remote locations with local standalone Splunk installations. Now we want to connect the search heads in our datacenters to those remote Splunk installations. It's important for us to use distributed search groups of search peers, because we want to search the remote Splunk installations only when needed, to save bandwidth. I saw in the distsearch documentation that we cannot use cluster and search-group functions at the same time. Does anyone know how I can integrate these two kinds of Splunk installations? Thanks!

