Channel: Questions in topic: "indexes.conf"

How to restrict access to one specific index without changing all the other roles?

The use case seems simple enough: **Let's say we have an index `sensitive_data` that contains... sensitive data. We want to ensure that ONLY the role `data-team` has access to this index. How do we do this _reliably_?**

----

There are a few ways to achieve this, but none are reliable or convenient:

* We could edit our `[default]` role in `authorize.conf` to include all the _other_ indexes but exclude `sensitive_data`. This is not a great approach for a few reasons:
  * It adds extra overhead: adding a new index for _general_ consumption has an additional step (add the new index to the whitelist).
  * It is error prone: everyone who administers user permissions needs to know _NOT_ to use `All internal indexes`, because that could inadvertently expose the `sensitive_data` index.
  * We could of course set up scripting to deal with the extra step, and alerting to catch the occasional whoops. But those are just hacks.
* We could use a search filter saying `index!=sensitive_data` and add that to our `[default]` role. (This approach is not recommended for some reason; I have not experimented with it personally. I am curious to know why, or what problems it could cause.)
* We could index this sensitive data on a separate indexer. (Our forwarders would then be tied to that indexer specifically.)

----

Ideally, we could restrict access to an index such that _no other settings could override this restriction_, either inadvertently or otherwise, unless it was _clear_ to the admin making the change what the effect would be. I found several other answers here that deal with this question, but none are very current, so I thought it would be worthwhile to bring it up again:

* http://answers.splunk.com/answers/170362/how-to-restrict-user-access-to-a-new-index.html
* http://answers.splunk.com/answers/80812/what-is-the-precidence-for-excluding-an-index-from-a-role-using-the-gui.html
* http://answers.splunk.com/answers/32940/restrict-index-access.html

Does **anyone** know a better way of doing this? Is it Splunk's express intention that Splunk not contain _highly_ sensitive data whose exposure would amount to a critical breach? I'm sure there are Splunk customers that deal with PII, PCI, HIPAA, SOX, etc., so how do **you guys** deal with this issue? Thanks in advance!
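For reference, the allow-list approach described in the first bullet would look roughly like this in `authorize.conf` (a hedged sketch: the role names other than `data-team` and all index names other than `sensitive_data` are hypothetical):

```
# authorize.conf -- explicit allow lists per role (sketch)
[role_data-team]
# Only this role is granted the sensitive index
srchIndexesAllowed = sensitive_data;main;os

[role_user]
# Every other role must enumerate the "safe" indexes and remember
# to leave sensitive_data out -- the error-prone step described above
srchIndexesAllowed = main;os
```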

Why is the cluster master of a multisite indexer cluster not showing new indexes I created and deployed to the peers?

I am setting up a multisite cluster: 2 sites (for now), each with 2 indexers and 1 search head. I've created 2 extra indexes in `$SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf` and successfully deployed them to the peers. On the peers, I can see the config in `slave-apps` and the new folders under `$SPLUNK_DB/`, but on the cluster master, at the endpoint `/en-US/manager/system/clustering?tab=indexes`, I only see the indexes `_audit` and `_internal`. What did I miss? I deployed by hand-editing the indexes.conf file on the master and then hitting deploy from the web UI on the master.
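For reference, a minimal sketch of the kind of stanza deployed through the master (the index name is hypothetical). One setting worth double-checking is `repFactor`, since only indexes with `repFactor = auto` are replicated across the cluster:

```
# $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf (sketch)
[myindex]
homePath   = $SPLUNK_DB/myindex/db
coldPath   = $SPLUNK_DB/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb
repFactor  = auto   # without this, the index is not replicated to the other peers
```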

Where do I place indexes.conf in an indexer clustering environment?

Hi everyone, I have an indexes.conf, but I'm not sure where to place it inside my cluster environment. I thought I had to place it on the master, inside the `master-apps` folder, but I understood that folder to be reserved for apps that get duplicated to each peer. So where should I place the file?
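For what it's worth, Splunk's cluster deployment docs describe putting peer index configuration under the master's `_cluster` bundle and pushing it out, along these lines:

```
# On the cluster master, place the file at:
#   $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
# Then distribute the bundle to all peers:
splunk apply cluster-bundle
```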

How to move fishbucket data to a warm bucket?

I wish to move all the data that is currently in the fishbucket to a warm bucket. Is there a command to do this?

Determining indexes.conf settings for all indexes combined

I've spent hours studying the documentation and articles outside of Splunkbase about configuring indexing, and I'm still confused, and our indexing isn't working as expected. This shouldn't be that difficult.

Hot + warm + cold usage is way beyond what I have configured for `maxTotalDataSizeMB` for the main index, but the volume for hot + warm is only at about 69% utilization, whereas the volume for cold is at 100%. Why did the cold volume fill up? I.e., why isn't cold going to frozen soon enough?

I'm thinking now that it might be because I have only taken the main index into account. That's probably because most of the documentation and articles discuss `maxTotalDataSizeMB` and other indexes.conf settings only in reference to `main`. We have `maxTotalDataSizeMB` set to 160000, which is sufficiently low for `main` (the hot+warm volume size is 100000 and the cold volume size is 100000). However, `maxTotalDataSizeMB` is set to the default value of 500000 for the other indexes (history, summary, etc.), which is way beyond the size of the two volumes combined.

Don't I need to take those into account as well? That is, don't I need to keep the total of the `maxTotalDataSizeMB` values for all indexes below our total volume size for this to work? The Splunk documentation isn't at all clear about this. I may file a support case for this, but I figured I'd try my luck on the forum first.
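For context, a per-volume cap is the setting that bounds combined usage across every index on a volume, while `maxTotalDataSizeMB` is strictly per index. A hedged sketch matching the sizes described above (paths and volume names hypothetical):

```
[volume:hotwarm]
path = /splunk/hotwarm
maxVolumeDataSizeMB = 100000   # caps hot+warm usage across ALL indexes on this volume

[volume:cold]
path = /splunk/cold
maxVolumeDataSizeMB = 100000   # caps cold usage across ALL indexes on this volume

[main]
homePath = volume:hotwarm/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
maxTotalDataSizeMB = 160000    # per-index cap; applies to this index alone
```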

Indexer Cluster: Under default in indexes.conf, when I use paths /indx/hot and /indx/cold and create a new index, why is /indx also populated?

Hey there... In the `[default]` section of my indexes.conf file (used for the internal Splunk indexes), I have the primary path defined as `/indx/hot` and the secondary as `/indx/cold`. When I use these paths and create a new index, it not only populates `/indx/hot` and `/indx/cold`, but also `/indx`. Does anyone know why this would be? I am finding managing Splunk to be quite the task.
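A hedged reconstruction of the kind of `[default]` paths described, for concreteness (all paths hypothetical). One thing worth checking is where `thawedPath` resolves, since it cannot reference a volume and defaults to a location under `$SPLUNK_DB` when unset:

```
[default]
homePath = /indx/hot/$_index_name/db
coldPath = /indx/cold/$_index_name/colddb
# If thawedPath is left unset, it defaults to $SPLUNK_DB/$_index_name/thaweddb;
# if $SPLUNK_DB points at /indx, that alone would populate /indx.
thawedPath = /indx/thawed/$_index_name/thaweddb
```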

Can I, or should I, change the default rotation from warm to cold buckets?

The default for rotation from warm to cold (`maxWarmDBCount`) is 300 buckets. I am retaining about one year's worth of data in all indexes, and most of that data is kept in warm buckets: I have about 13.22 TB of homePath data and 9.08 TB of coldPath data. If I change the warm-to-cold rotation from 300 to 150, I will move about 6.5 TB into cold storage. This will allow me to put the cold buckets on slower SAN space. My question is: what will happen when I start up Splunk with this new rotation policy? Will Splunk 6.3 choke when trying to move 6.5 TB of data from a fast SAN to a slower SAN? I have been asked to do this as a cost saving to the service.
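For reference, the change being described is a one-line setting (a hedged sketch; it can go under `[default]` to affect all indexes, or under a specific index stanza):

```
[default]
# Keep at most 150 warm buckets (down from the default of 300);
# the oldest warm buckets roll to coldPath as the limit is exceeded
maxWarmDBCount = 150
```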

Why is my index not being populated with my current configuration?

We've been chugging along fine with our 4 unreplicated indexers. I'd like to add a new index now, but have gotten stuck. This app is successfully deployed from the deployment server under `/opt/splunkforwarder/etc/apps/throwaway_app/`:

```
$ cat /opt/splunkforwarder/etc/apps/throwaway_app/bin/inputs.conf
[script:///opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh]
disabled = false
index = throwaway
# Run every 15 minutes
interval = 900
source = throwaway_top
sourcetype = script:///opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh

[monitor:///opt/logfile.log]
index = throwaway
disabled = false
sourcetype = throwaway.logfile

$ cat /opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh
#!/bin/bash
#top -n 1 | grep splun[k] | awk '{print $3" "$6" "$7}'
ps -ef
```

This is added to the end of a well-used indexes.conf file and is successfully deployed to the indexers:

```
[throwaway]
homePath = volume:primary/throwaway
coldPath = volume:primary/throwaway/colddb
thawedPath = $SPLUNK_DB/throwaway/thaweddb
tstatsHomePath = volume:primary/throwaway/datamodel_summary
summaryHomePath = volume:primary/throwaway/summary
maxMemMB = 20
maxHotBuckets = 10
maxConcurrentOptimizes = 6
maxTotalDataSizeMB = 4294967295
maxWarmDBCount = 9999999
maxDataSize = auto_high_volume
```

The throwaway index is recognized, and is listed with the settings I put in indexes.conf, by this search:

```
| eventcount summarize=false index=* | dedup index | fields index
```

As mentioned above, data is not aggregating in the new index, either when I search or when I look for a folder. I thought that new data would force the creation of the index folder structure, but nothing is getting created. I may be under the false impression that since we are not replicating data, we are not using a master. I've been reading through the docs, but everything seems to point to clustered (replicated?) indexers, which I don't have. Can someone help?

Attempting to run Splunk, why am I getting "Problem parsing indexes.conf: Cannot create index 3rdIndex: path of homePath must be absolute"?

```
[volume:primary]
path = opt/splunk/splunk_data
maxVolumeDataSizeMB = 2000000

[3rdIndex]
homePath = volume:primary/3rdIndex/db
coldPath = volume:cold/3rdIndex/colddb
thawedPath = $SPLUNK_DB/3rdIndex/thaweddb
maxDataSize = auto_high_volume
```

When attempting to run Splunk, it results in the message:

```
Problem parsing indexes.conf: Cannot create index 3rdIndex: path of homePath must be absolute ('opt/splunk/splunk_data/3rdIndex/db')
```

What's strange is that I have other indexers with the same stanzas in indexes.conf, except that on those, the volume definitions were split into a separate indexes.conf. Any ideas?
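Note that the error quotes the resolved path without a leading `/`. For comparison, an absolute volume path would look like this:

```
[volume:primary]
path = /opt/splunk/splunk_data   # leading slash: volume paths must be absolute
maxVolumeDataSizeMB = 2000000
```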

How to Not apply cluster-bundle

Hi, I am installing two new indexers for testing; as test indexers, they have very small disks. As cluster members, they get indexes.conf from the cluster bundle, where `_internal` is set to keep logs for a year. That `_internal` configuration is meant for my production Splunk servers. How can I bypass the cluster bundle on the test servers? Regards, Edouard Alias
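For reference, the kind of cluster-bundle setting described would look roughly like this (a hedged sketch; 31536000 seconds is about one year):

```
# In the cluster bundle's indexes.conf (sketch)
[_internal]
frozenTimePeriodInSecs = 31536000   # keep roughly one year before freezing
```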

How to limit my index growth

Hi Splunk users, I am having an issue with my indexes growing very large and clogging up the space on my disk. For example, I noticed the `perfmon` index getting very large, so I went ahead and set its limit to 5 GB. I read that once the limit is reached, Splunk would clean up automatically and delete older data. However, I see in Fire Brigade that the index size is still 25 GB. How can that be if I limited it to 5 GB? Thank you, Oliver
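For reference, a 5 GB cap would typically be expressed like this (a hedged sketch; `maxTotalDataSizeMB` is in megabytes and applies per index, per indexer):

```
[perfmon]
maxTotalDataSizeMB = 5120   # 5 GB; the oldest buckets roll to frozen once this is exceeded
```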

What is the proper indexes.conf syntax for moving colddb to another volume?

I would just like to confirm my syntax... I've read a bunch of postings, I've RTFM, but none that I've found have an actual sample or example of what's needed in the conf file. I've stopped Splunk, moved the colddb from the old location to the new location, edited `/opt/splunk/etc/system/local/indexes.conf`, and added the following:

```
coldPath = /newspace/colddb
coldPath.maxDataSizeMB = 409600
```

Then I restarted Splunk. That's it, in a nutshell, right? (Assuming a 500 GB drive.) Please let me know if that's wrong... thanks! Mike
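For comparison, a hedged sketch of a complete stanza: `coldPath` and `coldPath.maxDataSizeMB` are per-index settings, so they belong under the stanza of the index whose cold buckets were moved (the index name `main` here is hypothetical):

```
[main]
coldPath = /newspace/colddb
coldPath.maxDataSizeMB = 409600   # ~400 GB cap on cold storage for this index
```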

After upgrading to Splunk 6.3, why am I getting a "Found stanza=_blocksignature in indexes.conf" error and am unable to find this stanza running btool?

I'm getting the following error after I upgraded to Splunk 6.3:

```
Search peer Splunk has the following message: Found stanza=_blocksignature in indexes.conf. The block-signing feature is no longer available in Splunk. Please remove stanza=[_blocksignature] from the indexes.conf. For further details, please refer to the related topic in the latest version of 'Securing Splunk' manual on docs.splunk.com.
```

After running the btool command, I'm still unable to find this stanza in indexes.conf. Has anyone seen this issue before?
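A hedged way to hunt for the stanza across every layered copy of indexes.conf on the peer (assuming a *nix shell; `--debug` prints the source file for each line):

```
splunk btool indexes list --debug | grep -i blocksign
```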

After editing Indexes.conf: Problem parsing indexes.conf: stanza=_audit Required parameter=tstatsHomePath not configured

I was receiving the following messages on my search head, coming from one of my search peers:

```
Search peer has the following message: blockSignSize defined in indexes.conf. The block-signing feature is no longer available in Splunk. Please remove all blockSignSize and blockSignatureDatabase (if present) keys from the indexes.conf. For further details, please refer to the related topic in the latest version of 'Securing Splunk' manual on docs.splunk.com.

Search peer has the following message: Found stanza=_blocksignature in indexes.conf. The block-signing feature is no longer available in Splunk. Please remove stanza=[_blocksignature] from the indexes.conf. For further details, please refer to the related topic in the latest version of 'Securing Splunk' manual on docs.splunk.com.
```

So I went into `/opt/splunk/etc/system/local` on my search peer and removed the references to `blockSignSize` and `blockSignatureDatabase`, as well as the `_blocksignature` stanza. I then restarted splunkd. However, splunkd won't come up now. When I try to start splunkd, I now get the following error:

```
Problem parsing indexes.conf: stanza=_audit Required parameter=tstatsHomePath not configured
Validating databases (splunkd validatedb) failed with code '1'.
```

Any idea what has caused this to happen?
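For reference, the parser is asking for a `tstatsHomePath` that resolves for the `_audit` stanza. A hedged sketch of what such a setting looks like (the exact default path varies by version, so treat this value as an assumption and compare against `system/default/indexes.conf`):

```
[_audit]
# Hypothetical path -- check system/default/indexes.conf for your version's default
tstatsHomePath = $SPLUNK_DB/audit/datamodel_summary
```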

Is the Active Directory group name specified in authentication.conf case sensitive, and what will happen if we have 2 indexes with the same name in indexes.conf?

Hi fellow Splunkers, I have two questions:

1) Is the Active Directory group name specified in authentication.conf case sensitive? That is, do we have to specify exactly the same name that was used to create the group on the AD server?

2) What will happen if we specify the same index name (and its related config) twice in the indexes.conf file on the cluster master and run the `splunk apply cluster-bundle` command?

Thanks in advance
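For context on question 1, the group-to-role mapping lives in a `roleMap` stanza of authentication.conf; a hedged sketch (the strategy name and group name are hypothetical):

```
# authentication.conf (sketch)
[roleMap_myLDAPStrategy]
admin = Splunk_Admins   # the AD/LDAP group name, exactly as it should be matched
```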

Will an index be allowed to grow beyond max size if frozenTimePeriodInSecs is set, but not met?

We're losing data to the frozen directory prematurely. We have requirements to keep data searchable for 5 years, but had left `maxTotalDataSizeMB` at the default 500,000 MB and have now reached that limit earlier than expected. We have a `coldToFrozenDir` specified, so our data is safe there, but it's just not searchable. I have an open ticket to address an entire solution, but in the near term I would like to stop the data from rolling to frozen. If I set `frozenTimePeriodInSecs` for the index in question in indexes.conf, what behavior can I expect, given that the index is already at max size? Will it have the effect I'm hoping for and simply allow the index to grow without regard to the 500,000 MB limit, until such time as records meet the `frozenTimePeriodInSecs` value and can thus roll to frozen? Thanks for any advice. Michael
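For reference, a 5-year retention window in seconds, alongside the size cap already in place (a hedged sketch with a hypothetical index name; note that the two limits operate independently, and whichever is reached first rolls buckets to frozen):

```
[myindex]
frozenTimePeriodInSecs = 157680000   # 5 years (5 * 365 * 86400 seconds)
maxTotalDataSizeMB = 500000          # the default size cap still applies unless raised
```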

How to show a deployed index in Splunk Web on a search head to add data?

Hi, we are using a Splunk Enterprise installation that consists of the following:

* 1 search head, which also acts as a deployment server and license manager
* 1 indexer, with no GUI

I have created a deployment app on the search head called `test-indexes`. It contains `/test-indexes/default/`**indexes.conf**. In **indexes.conf** I have created an index called `[test]` with the default bucket paths, `maxDataSize`, and `maxTotalDataSizeMB` attributes. The index has been deployed on the indexer and is visible in the `/opt/splunk/var/lib/splunk` directory, both as `test.dat` and as the `test` directory.

**My issue** is that even though the index is deployed, there is no way for me to add data to the index from the search head. It does not exist in the **Settings -> Indexes** view in Splunk Web (search head). How can I resolve this issue? // Daniel
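For reference, a hedged sketch of the deployed stanza as described (the attribute values are hypothetical):

```
# test-indexes/default/indexes.conf (sketch)
[test]
homePath   = $SPLUNK_DB/test/db
coldPath   = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
maxDataSize = auto
maxTotalDataSizeMB = 500000
```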

Why do I get "Invalid key in stanza [tcp-ssl://:1470] ... connection_host=dns your indexes and inputs are not internally consistent"?

Hello, our `/opt/splunk/etc/apps/search/local/inputs.conf` file on our forwarder contains:

```
[tcp-ssl://:1470]
connection_host=dns
sourcetype=apm_log
index=security_logs
queueSize=5MB
```

When starting the forwarder, I get:

```
checking for conf file problems:...
invalid key in stanza [tcp-ssl://:1470] in /opt/splunk/etc/apps/search/local/inputs.conf ... connection_host=dns
your indexes and inputs are not internally consistent.
```

btool output offers no additional information. Can anyone offer advice? Thank you so much. msantich

How do I configure a data retention policy and a working script for my indexes?

Hi, I want to create a data retention policy for all my indexes, but I don't know how to configure these settings:

```
coldToFrozenDir = "<path to frozen archive>"
coldToFrozenScript = ["<path to program that runs script>"] "<path to script>"
```

How do we add this in the indexes.conf file? Can somebody give me an idea of how to do this? Thanks in advance.
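A minimal hedged sketch of both options (the index name and all paths are hypothetical; set one or the other for a given index, as `coldToFrozenDir` takes precedence if both are present):

```
[myindex]
# Option 1: archive frozen buckets by copying them to this directory
coldToFrozenDir = /splunk/frozen/myindex

# Option 2: run a script against each bucket before it is deleted
# coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/myColdToFrozen.py"
```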

Using the collect command in a scheduled search to add data to an index, why does each bucket only have 500MB of data with my current retention policy?

Hi all, I have a search that runs about every 20 minutes to merge a bunch of information together and make it easily accessible in a separate index. I do this using the `collect` command that Splunk provides. The search looks something like:

```
{huge aggregation query that merges a bunch of logs together}
| table _time RequestId UrlHit RoundTripDur ServerDur BrowserDur ...
| collect index=requestIndex sourcetype=mySourceType addtime=true testmode=false
```

I have a retention policy for the `requestIndex` index that says to make the `maxDataSize` over 5 GB. Essentially, I want each bucket to store a day's worth of data. When I look at the actual breakdown, however, I'm seeing multiple buckets a day with only ~500 MB of data each. This is well below the value I set in indexes.conf (again, 5 GB). Does this have something to do with the `collect` command? Is there any way I can make `requestIndex` respect my setting? Here is my stanza from indexes.conf; I have restarted Splunk to make sure this stanza took effect:

```
[requestIndex]
frozenTimePeriodInSecs = 63072000
maxDataSize = 50000
coldPath = F:\splunkIndex\CustomIndexes\requestIndex\colddb
maxWarmDBCount = 730
```

